Test Report: Docker_Linux_crio_arm64 22000

3f3a61283993ee602bd323c44b704727ac3a4ece:2025-11-29:42558

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.67
35 TestAddons/parallel/Registry 15.98
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 145.01
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 55.5
42 TestAddons/parallel/Headlamp 3.07
43 TestAddons/parallel/CloudSpanner 6.29
44 TestAddons/parallel/LocalPath 8.42
45 TestAddons/parallel/NvidiaDevicePlugin 6.39
46 TestAddons/parallel/Yakd 6.31
97 TestFunctional/parallel/ServiceCmdConnect 603.6
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.95
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
135 TestFunctional/parallel/ServiceCmd/Format 0.61
136 TestFunctional/parallel/ServiceCmd/URL 0.47
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.06
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.27
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
191 TestJSONOutput/pause/Command 1.65
197 TestJSONOutput/unpause/Command 1.94
282 TestPause/serial/Pause 6.85
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.59
304 TestStartStop/group/old-k8s-version/serial/Pause 6.44
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.49
317 TestStartStop/group/embed-certs/serial/Pause 7.69
321 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.9
329 TestStartStop/group/no-preload/serial/Pause 7.03
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.37
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.34
345 TestStartStop/group/newest-cni/serial/Pause 5.79
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.48
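The addon failures detailed in the sections below share the same MK_ADDON_DISABLE_PAUSED exit, so reproducing a single subtest is usually enough to see the pattern. A minimal sketch for re-running one failed subtest from a minikube checkout follows; the test/integration package path is inferred from the addons_test.go and helpers_test.go file names in the logs, and the -timeout value plus any extra harness flags (driver, container runtime, binary path) are assumptions not taken from this report.

	# Hedged sketch: re-run one failed subtest with the standard go test filter.
	# The package path and -timeout are assumptions; the suite may also need
	# driver/runtime flags or a build tag that this report does not show.
	go test ./test/integration \
	  -run 'TestAddons/serial/Volcano' \
	  -timeout 30m -v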
TestAddons/serial/Volcano (0.67s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable volcano --alsologtostderr -v=1: exit status 11 (671.500485ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:22.952695  308888 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:22.955012  308888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.955040  308888 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:22.955048  308888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.955336  308888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:22.955653  308888 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:22.956054  308888 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:22.956084  308888 addons.go:622] checking whether the cluster is paused
	I1129 09:18:22.956197  308888 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:22.956210  308888 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:22.956725  308888 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:22.975990  308888 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:22.976047  308888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:22.995374  308888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:23.104585  308888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:23.104675  308888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:23.136299  308888 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:23.136320  308888 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:23.136325  308888 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:23.136328  308888 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:23.136332  308888 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:23.136337  308888 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:23.136340  308888 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:23.136344  308888 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:23.136347  308888 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:23.136354  308888 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:23.136359  308888 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:23.136365  308888 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:23.136373  308888 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:23.136377  308888 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:23.136383  308888 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:23.136391  308888 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:23.136398  308888 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:23.136402  308888 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:23.136405  308888 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:23.136408  308888 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:23.136414  308888 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:23.136430  308888 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:23.136433  308888 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:23.136436  308888 cri.go:89] found id: ""
	I1129 09:18:23.136494  308888 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:23.152874  308888 out.go:203] 
	W1129 09:18:23.155856  308888 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:23.155885  308888 out.go:285] * 
	* 
	W1129 09:18:23.526927  308888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:23.530031  308888 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.67s)
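The exit status 11 above comes from the paused-state check that addons disable runs before touching the addon: crictl lists the kube-system containers successfully, but the follow-up sudo runc list -f json aborts because /run/runc does not exist on this CRI-O node, so minikube reports MK_ADDON_DISABLE_PAUSED. A minimal sketch for repeating both halves of that check by hand is below, assuming SSH access to the node through the same profile; the final ls is purely diagnostic and not part of what minikube runs.

	# Hedged sketch: repeat the paused-state check from the stderr above, inside the node.
	# Both commands are copied from the log; the profile name comes from this run.
	minikube -p addons-937561 ssh -- \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

	# The step that fails: runc's default state directory (/run/runc) is missing on this node.
	minikube -p addons-937561 ssh -- "sudo runc list -f json"

	# Diagnostic only (an assumption, not something minikube does): see which
	# runtime state directories actually exist under /run.
	minikube -p addons-937561 ssh -- "ls /run | grep -iE 'runc|crio|crun'"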

                                                
                                    
TestAddons/parallel/Registry (15.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.993886ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003372082s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003202935s
addons_test.go:392: (dbg) Run:  kubectl --context addons-937561 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-937561 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-937561 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.367448499s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 ip
2025/11/29 09:18:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable registry --alsologtostderr -v=1: exit status 11 (309.655932ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:49.585793  309852 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:49.589169  309852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:49.589227  309852 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:49.589249  309852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:49.589558  309852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:49.589924  309852 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:49.590397  309852 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:49.590436  309852 addons.go:622] checking whether the cluster is paused
	I1129 09:18:49.590567  309852 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:49.590593  309852 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:49.591322  309852 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:49.611617  309852 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:49.611669  309852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:49.634331  309852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:49.740757  309852 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:49.740851  309852 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:49.779380  309852 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:49.779404  309852 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:49.779412  309852 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:49.779416  309852 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:49.779419  309852 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:49.779423  309852 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:49.779425  309852 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:49.779428  309852 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:49.779431  309852 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:49.779437  309852 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:49.779440  309852 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:49.779443  309852 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:49.779446  309852 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:49.779449  309852 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:49.779452  309852 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:49.779456  309852 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:49.779460  309852 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:49.779463  309852 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:49.779466  309852 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:49.779469  309852 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:49.779473  309852 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:49.779476  309852 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:49.779479  309852 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:49.779481  309852 cri.go:89] found id: ""
	I1129 09:18:49.779529  309852 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:49.798408  309852 out.go:203] 
	W1129 09:18:49.801285  309852 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:49.801309  309852 out.go:285] * 
	* 
	W1129 09:18:49.808502  309852 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:49.813109  309852 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.98s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.48s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.060043ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-937561
addons_test.go:332: (dbg) Run:  kubectl --context addons-937561 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (261.306385ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:19:59.071086  311564 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:19:59.072811  311564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:59.072828  311564 out.go:374] Setting ErrFile to fd 2...
	I1129 09:19:59.072835  311564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:59.073084  311564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:19:59.073393  311564 mustload.go:66] Loading cluster: addons-937561
	I1129 09:19:59.074883  311564 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:59.074908  311564 addons.go:622] checking whether the cluster is paused
	I1129 09:19:59.075031  311564 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:59.075048  311564 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:19:59.075552  311564 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:19:59.092004  311564 ssh_runner.go:195] Run: systemctl --version
	I1129 09:19:59.092065  311564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:19:59.109682  311564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:19:59.220789  311564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:19:59.220893  311564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:19:59.255062  311564 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:19:59.255087  311564 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:19:59.255097  311564 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:19:59.255101  311564 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:19:59.255104  311564 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:19:59.255108  311564 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:19:59.255111  311564 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:19:59.255113  311564 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:19:59.255117  311564 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:19:59.255127  311564 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:19:59.255131  311564 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:19:59.255134  311564 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:19:59.255137  311564 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:19:59.255140  311564 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:19:59.255143  311564 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:19:59.255148  311564 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:19:59.255155  311564 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:19:59.255159  311564 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:19:59.255162  311564 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:19:59.255165  311564 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:19:59.255170  311564 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:19:59.255173  311564 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:19:59.255175  311564 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:19:59.255179  311564 cri.go:89] found id: ""
	I1129 09:19:59.255236  311564 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:19:59.269678  311564 out.go:203] 
	W1129 09:19:59.272569  311564 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:19:59.272590  311564 out.go:285] * 
	* 
	W1129 09:19:59.279191  311564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:19:59.282150  311564 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

                                                
                                    
TestAddons/parallel/Ingress (145.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-937561 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-937561 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-937561 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [76b58813-e684-4444-94b9-e78ffc677016] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [76b58813-e684-4444-94b9-e78ffc677016] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002906269s
I1129 09:19:12.118449  302182 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.134404469s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-937561 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
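Unlike the addon-disable failures above, this test gets a Running nginx pod, but the in-node curl to http://127.0.0.1/ with the Host: nginx.example.com header never answers within the 2m10s window (ssh reports exit status 28, which matches curl's timeout code). A minimal sketch for checking the ingress-nginx controller and retrying that request by hand follows; the namespace and controller label are taken from the wait step at addons_test.go:209, while the short --max-time is an assumption to keep manual retries quick.

	# Hedged sketch: inspect the controller, then retry the request the test makes.
	kubectl --context addons-937561 -n ingress-nginx get pods \
	  -l app.kubernetes.io/component=controller -o wide

	# Same request as the failing step, with a short client-side timeout
	# (--max-time 10 is an assumption, not part of the test).
	minikube -p addons-937561 ssh -- \
	  "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"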
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-937561
helpers_test.go:243: (dbg) docker inspect addons-937561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f",
	        "Created": "2025-11-29T09:15:56.838859923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:15:56.897017523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f-json.log",
	        "Name": "/addons-937561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-937561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-937561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f",
	                "LowerDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-937561",
	                "Source": "/var/lib/docker/volumes/addons-937561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-937561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-937561",
	                "name.minikube.sigs.k8s.io": "addons-937561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27dc52507f57398d147fcfa5124c353acbc3c332b2bc79354c09e1567200156",
	            "SandboxKey": "/var/run/docker/netns/a27dc52507f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-937561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:c1:ed:d0:3f:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52934f66aab693c83ce51ab1a5dca17dee70ef0f2d4c5842285e8c8d9c8754bd",
	                    "EndpointID": "ef0cc4a9f52e9e4d652212f30657b75719c3a5dff085e2275aae9fb77e1aafd6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-937561",
	                        "ff16db5210e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-937561 -n addons-937561
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-937561 logs -n 25: (1.501101378s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-753424                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-753424 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ --download-only -p binary-mirror-549171 --alsologtostderr --binary-mirror http://127.0.0.1:40279 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-549171   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p binary-mirror-549171                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-549171   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ addons  │ enable dashboard -p addons-937561                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-937561                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ start   │ -p addons-937561 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ addons-937561 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ enable headlamp -p addons-937561 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ ip      │ addons-937561 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ addons-937561 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ ssh     │ addons-937561 ssh cat /opt/local-path-provisioner/pvc-e16cf624-9fea-4565-93bd-22ce2cfea277_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ addons-937561 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ ssh     │ addons-937561 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ addons  │ addons-937561 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ addons  │ addons-937561 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ addons  │ addons-937561 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-937561                                                                                                                                                                                                                                                                                                                                                                                           │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ addons  │ addons-937561 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ ip      │ addons-937561 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:31.087094  302940 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:31.087226  302940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:31.087232  302940 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:31.087237  302940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:31.087504  302940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:15:31.087954  302940 out.go:368] Setting JSON to false
	I1129 09:15:31.088761  302940 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7080,"bootTime":1764400651,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:15:31.088836  302940 start.go:143] virtualization:  
	I1129 09:15:31.092210  302940 out.go:179] * [addons-937561] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:15:31.096145  302940 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:15:31.096277  302940 notify.go:221] Checking for updates...
	I1129 09:15:31.102111  302940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:31.105097  302940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:15:31.107935  302940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:15:31.110919  302940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:15:31.113798  302940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:15:31.117041  302940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:31.150923  302940 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:15:31.151062  302940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:31.211003  302940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:31.202097401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:31.211112  302940 docker.go:319] overlay module found
	I1129 09:15:31.214320  302940 out.go:179] * Using the docker driver based on user configuration
	I1129 09:15:31.217102  302940 start.go:309] selected driver: docker
	I1129 09:15:31.217122  302940 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:31.217135  302940 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:15:31.217862  302940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:31.280116  302940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:31.271255155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:31.280279  302940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:31.280502  302940 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:15:31.283467  302940 out.go:179] * Using Docker driver with root privileges
	I1129 09:15:31.286304  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:15:31.286380  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:15:31.286393  302940 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:15:31.286481  302940 start.go:353] cluster config:
	{Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1129 09:15:31.291461  302940 out.go:179] * Starting "addons-937561" primary control-plane node in "addons-937561" cluster
	I1129 09:15:31.294241  302940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:15:31.297285  302940 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:15:31.300142  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:31.300194  302940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 09:15:31.300204  302940 cache.go:65] Caching tarball of preloaded images
	I1129 09:15:31.300226  302940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:15:31.300304  302940 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 09:15:31.300316  302940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:15:31.300669  302940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json ...
	I1129 09:15:31.300704  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json: {Name:mk4be157a7892880b738be8e763cf0724c47d991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:15:31.315940  302940 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 09:15:31.316068  302940 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 09:15:31.316086  302940 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 09:15:31.316090  302940 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 09:15:31.316097  302940 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 09:15:31.316102  302940 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1129 09:15:49.494835  302940 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1129 09:15:49.494880  302940 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:15:49.494934  302940 start.go:360] acquireMachinesLock for addons-937561: {Name:mk9fc399e1321a9643dc794a9b0f9e90e1914dc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:15:49.495681  302940 start.go:364] duration metric: took 724.358µs to acquireMachinesLock for "addons-937561"
	I1129 09:15:49.495721  302940 start.go:93] Provisioning new machine with config: &{Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:15:49.495800  302940 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:15:49.499115  302940 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1129 09:15:49.499362  302940 start.go:159] libmachine.API.Create for "addons-937561" (driver="docker")
	I1129 09:15:49.499401  302940 client.go:173] LocalClient.Create starting
	I1129 09:15:49.499514  302940 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 09:15:49.764545  302940 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 09:15:49.959033  302940 cli_runner.go:164] Run: docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:15:49.973756  302940 cli_runner.go:211] docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:15:49.973837  302940 network_create.go:284] running [docker network inspect addons-937561] to gather additional debugging logs...
	I1129 09:15:49.973859  302940 cli_runner.go:164] Run: docker network inspect addons-937561
	W1129 09:15:49.989712  302940 cli_runner.go:211] docker network inspect addons-937561 returned with exit code 1
	I1129 09:15:49.989744  302940 network_create.go:287] error running [docker network inspect addons-937561]: docker network inspect addons-937561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-937561 not found
	I1129 09:15:49.989758  302940 network_create.go:289] output of [docker network inspect addons-937561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-937561 not found
	
	** /stderr **
	I1129 09:15:49.989851  302940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:15:50.005515  302940 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a74000}
	I1129 09:15:50.005559  302940 network_create.go:124] attempt to create docker network addons-937561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1129 09:15:50.005620  302940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-937561 addons-937561
	I1129 09:15:50.073094  302940 network_create.go:108] docker network addons-937561 192.168.49.0/24 created
	I1129 09:15:50.073129  302940 kic.go:121] calculated static IP "192.168.49.2" for the "addons-937561" container
	I1129 09:15:50.073210  302940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:15:50.090005  302940 cli_runner.go:164] Run: docker volume create addons-937561 --label name.minikube.sigs.k8s.io=addons-937561 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:15:50.109089  302940 oci.go:103] Successfully created a docker volume addons-937561
	I1129 09:15:50.109184  302940 cli_runner.go:164] Run: docker run --rm --name addons-937561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --entrypoint /usr/bin/test -v addons-937561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:15:52.348789  302940 cli_runner.go:217] Completed: docker run --rm --name addons-937561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --entrypoint /usr/bin/test -v addons-937561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.239554653s)
	I1129 09:15:52.348822  302940 oci.go:107] Successfully prepared a docker volume addons-937561
	I1129 09:15:52.348862  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:52.348880  302940 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:15:52.348951  302940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-937561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:15:56.765350  302940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-937561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.41635727s)
	I1129 09:15:56.765385  302940 kic.go:203] duration metric: took 4.416501173s to extract preloaded images to volume ...
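The tar step above populates the addons-937561 volume with CRI-O's image store before the node container itself exists. A hypothetical spot-check (not something the test run performs) would reuse the same kicbase image to look inside the volume; the storage path below is an assumption based on CRI-O's default layout, not something shown in this log:

    # hypothetical: list the image store that the preload extraction left in the volume
    docker run --rm --entrypoint /usr/bin/ls \
      -v addons-937561:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
      /var/lib/containers/storage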
	W1129 09:15:56.765532  302940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:15:56.765655  302940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:15:56.824268  302940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-937561 --name addons-937561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-937561 --network addons-937561 --ip 192.168.49.2 --volume addons-937561:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:15:57.129147  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Running}}
	I1129 09:15:57.156908  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.176840  302940 cli_runner.go:164] Run: docker exec addons-937561 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:15:57.241573  302940 oci.go:144] the created container "addons-937561" has a running status.
	I1129 09:15:57.241601  302940 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa...
	I1129 09:15:57.472838  302940 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:15:57.498387  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.531990  302940 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:15:57.532015  302940 kic_runner.go:114] Args: [docker exec --privileged addons-937561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:15:57.604324  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.628638  302940 machine.go:94] provisionDockerMachine start ...
	I1129 09:15:57.628736  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:15:57.646554  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:57.646879  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:15:57.646889  302940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:15:57.647522  302940 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:16:00.797501  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-937561
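The inspect template used a few lines above is how minikube resolves the host side of the container's published 22/tcp port (33140 in this run). Outside of minikube, the same mapping can be read with plain docker, for example:

    # prints the host address:port bound to the container's SSH port,
    # e.g. 127.0.0.1:33140 for this run
    docker port addons-937561 22/tcp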
	
	I1129 09:16:00.797523  302940 ubuntu.go:182] provisioning hostname "addons-937561"
	I1129 09:16:00.797587  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:00.815422  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:00.815742  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:00.815757  302940 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-937561 && echo "addons-937561" | sudo tee /etc/hostname
	I1129 09:16:00.977309  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-937561
	
	I1129 09:16:00.977462  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:00.993782  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:00.994277  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:00.994308  302940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-937561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-937561/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-937561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:16:01.149184  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:16:01.149260  302940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 09:16:01.149332  302940 ubuntu.go:190] setting up certificates
	I1129 09:16:01.149361  302940 provision.go:84] configureAuth start
	I1129 09:16:01.149433  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:01.167373  302940 provision.go:143] copyHostCerts
	I1129 09:16:01.167469  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 09:16:01.167618  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 09:16:01.167682  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 09:16:01.167736  302940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.addons-937561 san=[127.0.0.1 192.168.49.2 addons-937561 localhost minikube]
	I1129 09:16:01.451675  302940 provision.go:177] copyRemoteCerts
	I1129 09:16:01.451742  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:16:01.451785  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.470989  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:01.577793  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 09:16:01.595280  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:16:01.612815  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:16:01.631214  302940 provision.go:87] duration metric: took 481.823107ms to configureAuth
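configureAuth generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-937561, localhost and minikube, then copies it to /etc/docker/server.pem inside the node. A hypothetical way to confirm those SANs after the fact, assuming openssl is available in the kicbase image (the test itself never relies on this):

    # hypothetical check, not part of the test run
    out/minikube-linux-arm64 -p addons-937561 ssh \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"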
	I1129 09:16:01.631281  302940 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:16:01.631500  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:01.631614  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.648958  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:01.649271  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:01.649291  302940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:16:01.953439  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:16:01.953462  302940 machine.go:97] duration metric: took 4.324804462s to provisionDockerMachine
	I1129 09:16:01.953472  302940 client.go:176] duration metric: took 12.45406039s to LocalClient.Create
	I1129 09:16:01.953485  302940 start.go:167] duration metric: took 12.454124622s to libmachine.API.Create "addons-937561"
	I1129 09:16:01.953492  302940 start.go:293] postStartSetup for "addons-937561" (driver="docker")
	I1129 09:16:01.953505  302940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:16:01.953579  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:16:01.953624  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.971246  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.078464  302940 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:16:02.081991  302940 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:16:02.082023  302940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:16:02.082035  302940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 09:16:02.082115  302940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 09:16:02.082147  302940 start.go:296] duration metric: took 128.646046ms for postStartSetup
	I1129 09:16:02.082471  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:02.099306  302940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json ...
	I1129 09:16:02.099596  302940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:16:02.099654  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.116127  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.219859  302940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:16:02.225381  302940 start.go:128] duration metric: took 12.729565135s to createHost
	I1129 09:16:02.225407  302940 start.go:83] releasing machines lock for "addons-937561", held for 12.729707348s
	I1129 09:16:02.225478  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:02.242408  302940 ssh_runner.go:195] Run: cat /version.json
	I1129 09:16:02.242464  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.242478  302940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:16:02.242543  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.266297  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.270367  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.459603  302940 ssh_runner.go:195] Run: systemctl --version
	I1129 09:16:02.465802  302940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:16:02.499964  302940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:16:02.504393  302940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:16:02.504485  302940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:16:02.532984  302940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:16:02.533061  302940 start.go:496] detecting cgroup driver to use...
	I1129 09:16:02.533110  302940 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:16:02.533188  302940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:16:02.549698  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:16:02.562266  302940 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:16:02.562370  302940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:16:02.579832  302940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:16:02.598229  302940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:16:02.716223  302940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:16:02.834559  302940 docker.go:234] disabling docker service ...
	I1129 09:16:02.834626  302940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:16:02.855106  302940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:16:02.868601  302940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:16:02.982745  302940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:16:03.100636  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:16:03.113082  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:16:03.127676  302940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:16:03.127754  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.136537  302940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:16:03.136660  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.146027  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.154870  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.164743  302940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:16:03.173081  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.182214  302940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.195425  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.204090  302940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:16:03.211718  302940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:16:03.218771  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:03.327937  302940 ssh_runner.go:195] Run: sudo systemctl restart crio
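Between 09:16:03.127 and 09:16:03.195 the sed calls above pin the pause image, switch CRI-O to the cgroupfs cgroup driver, move conmon into the pod cgroup, and open unprivileged low ports; one step earlier, crictl is pointed at the CRI-O socket. Assembled from those commands, the node ends up with file contents roughly equivalent to the sketch below (the real /etc/crio/crio.conf.d/02-crio.conf ships additional settings that the sed edits leave untouched):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # settings enforced in /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]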
	I1129 09:16:03.484824  302940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:03.484938  302940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:03.488701  302940 start.go:564] Will wait 60s for crictl version
	I1129 09:16:03.488813  302940 ssh_runner.go:195] Run: which crictl
	I1129 09:16:03.492292  302940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:03.516250  302940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:03.516390  302940 ssh_runner.go:195] Run: crio --version
	I1129 09:16:03.544947  302940 ssh_runner.go:195] Run: crio --version
	I1129 09:16:03.577724  302940 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:16:03.580551  302940 cli_runner.go:164] Run: docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:03.599298  302940 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:03.603010  302940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:03.612290  302940 kubeadm.go:884] updating cluster {Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:03.612416  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:03.612470  302940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:03.648729  302940 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:03.648753  302940 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:16:03.648812  302940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:03.673844  302940 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:03.673868  302940 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:03.673877  302940 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1129 09:16:03.673966  302940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-937561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
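The ExecStart line above becomes a systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (09:16:03.752). A hypothetical way to confirm the flags the node's kubelet will actually run with, once the cluster is up:

    # shows kubelet.service together with the 10-kubeadm.conf drop-in
    out/minikube-linux-arm64 -p addons-937561 ssh "sudo systemctl cat kubelet"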
	I1129 09:16:03.674050  302940 ssh_runner.go:195] Run: crio config
	I1129 09:16:03.736391  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:16:03.736410  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:03.736427  302940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:03.736450  302940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-937561 NodeName:addons-937561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:03.736569  302940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-937561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
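This generated manifest is written to /var/tmp/minikube/kubeadm.yaml.new shortly afterwards (09:16:03.779) and is what kubeadm consumes when the control plane is bootstrapped. A hypothetical sanity check, assuming the staged kubeadm binary and that the "kubeadm config validate" subcommand is available (it is in recent releases):

    # hypothetical: validate the generated kubeadm config inside the node
    out/minikube-linux-arm64 -p addons-937561 ssh \
      "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"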
	
	I1129 09:16:03.736642  302940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:03.744617  302940 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:03.744731  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:03.752208  302940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1129 09:16:03.765946  302940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:03.779464  302940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1129 09:16:03.791502  302940 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:03.794832  302940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:03.803809  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:03.917198  302940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:03.933103  302940 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561 for IP: 192.168.49.2
	I1129 09:16:03.933121  302940 certs.go:195] generating shared ca certs ...
	I1129 09:16:03.933137  302940 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:03.933955  302940 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 09:16:04.847330  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt ...
	I1129 09:16:04.847364  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt: {Name:mkac8d45d81f8728bae19fa79b1cb3f9b39b4bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:04.847599  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key ...
	I1129 09:16:04.847615  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key: {Name:mk9e27192e1fe89020239cee41fe7012ed7e494c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:04.847708  302940 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 09:16:05.155609  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt ...
	I1129 09:16:05.155642  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt: {Name:mkc058cc1db8a6826bb5a0bc0daef7850cfba061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.156429  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key ...
	I1129 09:16:05.156444  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key: {Name:mk376367515faf0510b70b573b593c791268b6cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.156534  302940 certs.go:257] generating profile certs ...
	I1129 09:16:05.156595  302940 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key
	I1129 09:16:05.156611  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt with IP's: []
	I1129 09:16:05.421662  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt ...
	I1129 09:16:05.421695  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: {Name:mkc3397fa5a25a24bd5f51f2c5c4a606cc819664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.422577  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key ...
	I1129 09:16:05.422597  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key: {Name:mk4a9c073dc4557d5df42b1ae8c957dd5d02abb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.423354  302940 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb
	I1129 09:16:05.423384  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1129 09:16:05.605977  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb ...
	I1129 09:16:05.606011  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb: {Name:mk9b0c8d9e99cbe481159be628ca5b19b1897710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.606881  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb ...
	I1129 09:16:05.606908  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb: {Name:mk929b22cc4c62d9796b35083fc8d767ea3156c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.607616  302940 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt
	I1129 09:16:05.607730  302940 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key
	I1129 09:16:05.607824  302940 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key
	I1129 09:16:05.607873  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt with IP's: []
	I1129 09:16:05.761686  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt ...
	I1129 09:16:05.761722  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt: {Name:mk4eae6adbea52836e2a038870ccc1ea957c14a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.761894  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key ...
	I1129 09:16:05.761907  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key: {Name:mk39a6f1248b763f4c4ffd9fea8461ec3e28fcea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.762121  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:16:05.762167  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:16:05.762197  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:05.762234  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 09:16:05.762788  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:05.783085  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:05.801923  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:05.819929  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:16:05.837433  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 09:16:05.853966  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:16:05.871709  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:05.888828  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:05.906601  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:05.924358  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:05.937889  302940 ssh_runner.go:195] Run: openssl version
	I1129 09:16:05.944268  302940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:05.952902  302940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.956693  302940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.956761  302940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.997704  302940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
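	(The two commands above follow the standard OpenSSL hashed-symlink layout for trusted CAs: the certificate's subject hash becomes the symlink name under /etc/ssl/certs so the verifier can locate it. A hedged sketch of the same steps, with the path and hash taken from this run for illustration:

	    CA=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CA")   # b5213941 in this run
	    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"
	)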
	I1129 09:16:06.013949  302940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:06.018518  302940 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:16:06.018574  302940 kubeadm.go:401] StartCluster: {Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:06.018661  302940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:06.018735  302940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:06.047308  302940 cri.go:89] found id: ""
	I1129 09:16:06.047403  302940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:06.055714  302940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:16:06.063755  302940 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:16:06.063825  302940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:16:06.071749  302940 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:16:06.071770  302940 kubeadm.go:158] found existing configuration files:
	
	I1129 09:16:06.071846  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:16:06.079810  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:16:06.079929  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:16:06.087791  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:16:06.096102  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:16:06.096178  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:16:06.104109  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:16:06.111989  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:16:06.112055  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:16:06.119301  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:16:06.126933  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:16:06.126999  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:16:06.134250  302940 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:16:06.182497  302940 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:16:06.182813  302940 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:16:06.207474  302940 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:16:06.207551  302940 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:16:06.207590  302940 kubeadm.go:319] OS: Linux
	I1129 09:16:06.207638  302940 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:16:06.207687  302940 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:16:06.207736  302940 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:16:06.207786  302940 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:16:06.207836  302940 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:16:06.207885  302940 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:16:06.207946  302940 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:16:06.207996  302940 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:16:06.208044  302940 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:16:06.276367  302940 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:16:06.276476  302940 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:16:06.276566  302940 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:16:06.285910  302940 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:16:06.289141  302940 out.go:252]   - Generating certificates and keys ...
	I1129 09:16:06.289239  302940 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:16:06.289310  302940 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:16:06.434478  302940 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:16:06.844154  302940 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:16:07.548481  302940 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:16:07.891785  302940 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:16:08.917297  302940 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:16:08.917696  302940 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-937561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 09:16:09.576119  302940 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:16:09.576449  302940 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-937561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 09:16:10.782564  302940 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:16:10.994698  302940 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:16:11.120656  302940 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:16:11.120930  302940 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:16:11.274092  302940 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:16:12.696326  302940 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:16:12.847704  302940 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:16:13.108716  302940 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:16:13.936240  302940 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:16:13.936813  302940 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:16:13.940123  302940 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:16:13.943536  302940 out.go:252]   - Booting up control plane ...
	I1129 09:16:13.943640  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:16:13.943721  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:16:13.944847  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:16:13.960435  302940 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:16:13.960635  302940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:16:13.968503  302940 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:16:13.968858  302940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:16:13.969100  302940 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:16:14.098382  302940 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:16:14.098518  302940 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:16:15.600331  302940 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502571416s
	I1129 09:16:15.604486  302940 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:16:15.604593  302940 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1129 09:16:15.604695  302940 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:16:15.604799  302940 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:16:18.258292  302940 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.65331495s
	I1129 09:16:21.363125  302940 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.758636574s
	I1129 09:16:21.607867  302940 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003192759s
	I1129 09:16:21.626959  302940 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:16:21.641981  302940 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:16:21.654701  302940 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:16:21.654908  302940 kubeadm.go:319] [mark-control-plane] Marking the node addons-937561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:16:21.668336  302940 kubeadm.go:319] [bootstrap-token] Using token: h33wha.0mtwavoxaivfe568
	I1129 09:16:21.673356  302940 out.go:252]   - Configuring RBAC rules ...
	I1129 09:16:21.673488  302940 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:16:21.675523  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:16:21.683570  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:16:21.687703  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:16:21.691721  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:16:21.697840  302940 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:16:22.015548  302940 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:16:22.467145  302940 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:16:23.015241  302940 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:16:23.016427  302940 kubeadm.go:319] 
	I1129 09:16:23.016508  302940 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:16:23.016518  302940 kubeadm.go:319] 
	I1129 09:16:23.016596  302940 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:16:23.016604  302940 kubeadm.go:319] 
	I1129 09:16:23.016630  302940 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:16:23.016692  302940 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:16:23.016758  302940 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:16:23.016767  302940 kubeadm.go:319] 
	I1129 09:16:23.016821  302940 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:16:23.016829  302940 kubeadm.go:319] 
	I1129 09:16:23.016877  302940 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:16:23.016885  302940 kubeadm.go:319] 
	I1129 09:16:23.016937  302940 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:16:23.017016  302940 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:16:23.017089  302940 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:16:23.017098  302940 kubeadm.go:319] 
	I1129 09:16:23.017183  302940 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:16:23.017268  302940 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:16:23.017275  302940 kubeadm.go:319] 
	I1129 09:16:23.017359  302940 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h33wha.0mtwavoxaivfe568 \
	I1129 09:16:23.017473  302940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 09:16:23.017498  302940 kubeadm.go:319] 	--control-plane 
	I1129 09:16:23.017506  302940 kubeadm.go:319] 
	I1129 09:16:23.017591  302940 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:16:23.017599  302940 kubeadm.go:319] 
	I1129 09:16:23.017681  302940 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h33wha.0mtwavoxaivfe568 \
	I1129 09:16:23.017790  302940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 09:16:23.020649  302940 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 09:16:23.020878  302940 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 09:16:23.020987  302940 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
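	(As the Start line at 09:16:06.134250 shows, minikube bootstraps the node by invoking kubeadm directly with a generated config and a long --ignore-preflight-errors list; the warnings above are preflight findings that were downgraded rather than treated as fatal. A simplified sketch of that invocation, keeping the binary and config paths from this run and only a subset of the ignore list:

	    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem
	)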
	I1129 09:16:23.021005  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:16:23.021013  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:23.024258  302940 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:16:23.027176  302940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:16:23.031176  302940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:16:23.031196  302940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:16:23.044969  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:16:23.324199  302940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:16:23.324318  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:23.324343  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-937561 minikube.k8s.io/updated_at=2025_11_29T09_16_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=addons-937561 minikube.k8s.io/primary=true
	I1129 09:16:23.470616  302940 ops.go:34] apiserver oom_adj: -16
	I1129 09:16:23.470741  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:23.971402  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:24.470917  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:24.970888  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:25.471759  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:25.971293  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:26.470976  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:26.970928  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:27.471002  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:27.971048  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:28.471307  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:28.580427  302940 kubeadm.go:1114] duration metric: took 5.256229152s to wait for elevateKubeSystemPrivileges
	I1129 09:16:28.580462  302940 kubeadm.go:403] duration metric: took 22.56189219s to StartCluster
	I1129 09:16:28.580479  302940 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:28.580588  302940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:16:28.580985  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:28.581852  302940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:16:28.581994  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:16:28.582261  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:28.582305  302940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1129 09:16:28.582403  302940 addons.go:70] Setting yakd=true in profile "addons-937561"
	I1129 09:16:28.582424  302940 addons.go:239] Setting addon yakd=true in "addons-937561"
	I1129 09:16:28.582451  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.582949  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.583405  302940 addons.go:70] Setting metrics-server=true in profile "addons-937561"
	I1129 09:16:28.583424  302940 addons.go:239] Setting addon metrics-server=true in "addons-937561"
	I1129 09:16:28.583447  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.583456  302940 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-937561"
	I1129 09:16:28.583472  302940 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-937561"
	I1129 09:16:28.583493  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.583879  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.583899  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586170  302940 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-937561"
	I1129 09:16:28.586870  302940 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-937561"
	I1129 09:16:28.586911  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.587362  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586763  302940 addons.go:70] Setting registry=true in profile "addons-937561"
	I1129 09:16:28.589556  302940 addons.go:239] Setting addon registry=true in "addons-937561"
	I1129 09:16:28.589676  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.590843  302940 addons.go:70] Setting cloud-spanner=true in profile "addons-937561"
	I1129 09:16:28.596423  302940 addons.go:239] Setting addon cloud-spanner=true in "addons-937561"
	I1129 09:16:28.596524  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.596843  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.597072  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586788  302940 addons.go:70] Setting registry-creds=true in profile "addons-937561"
	I1129 09:16:28.608286  302940 addons.go:239] Setting addon registry-creds=true in "addons-937561"
	I1129 09:16:28.608330  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.586801  302940 addons.go:70] Setting storage-provisioner=true in profile "addons-937561"
	I1129 09:16:28.608579  302940 addons.go:239] Setting addon storage-provisioner=true in "addons-937561"
	I1129 09:16:28.608603  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.591004  302940 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-937561"
	I1129 09:16:28.608725  302940 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-937561"
	I1129 09:16:28.608767  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.609280  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.611691  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586815  302940 addons.go:70] Setting volcano=true in profile "addons-937561"
	I1129 09:16:28.618840  302940 addons.go:239] Setting addon volcano=true in "addons-937561"
	I1129 09:16:28.618877  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.619349  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586809  302940 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-937561"
	I1129 09:16:28.619587  302940 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-937561"
	I1129 09:16:28.620732  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586820  302940 addons.go:70] Setting volumesnapshots=true in profile "addons-937561"
	I1129 09:16:28.641362  302940 addons.go:239] Setting addon volumesnapshots=true in "addons-937561"
	I1129 09:16:28.641402  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.641903  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591017  302940 addons.go:70] Setting default-storageclass=true in profile "addons-937561"
	I1129 09:16:28.648719  302940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-937561"
	I1129 09:16:28.649147  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591029  302940 addons.go:70] Setting ingress=true in profile "addons-937561"
	I1129 09:16:28.678635  302940 addons.go:239] Setting addon ingress=true in "addons-937561"
	I1129 09:16:28.678687  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.679283  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591024  302940 addons.go:70] Setting gcp-auth=true in profile "addons-937561"
	I1129 09:16:28.686362  302940 mustload.go:66] Loading cluster: addons-937561
	I1129 09:16:28.686576  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:28.686828  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591047  302940 addons.go:70] Setting ingress-dns=true in profile "addons-937561"
	I1129 09:16:28.713043  302940 addons.go:239] Setting addon ingress-dns=true in "addons-937561"
	I1129 09:16:28.713092  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.713583  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591075  302940 addons.go:70] Setting inspektor-gadget=true in profile "addons-937561"
	I1129 09:16:28.718658  302940 addons.go:239] Setting addon inspektor-gadget=true in "addons-937561"
	I1129 09:16:28.718701  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.719188  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.749860  302940 out.go:179] * Verifying Kubernetes components...
	I1129 09:16:28.753985  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:28.758607  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.782622  302940 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1129 09:16:28.802747  302940 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1129 09:16:28.808534  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1129 09:16:28.808603  302940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1129 09:16:28.808706  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.816781  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1129 09:16:28.858545  302940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:16:28.862351  302940 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1129 09:16:28.862989  302940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:28.863027  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:16:28.863130  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.865853  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:16:28.865878  302940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:16:28.866047  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.885585  302940 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1129 09:16:28.886381  302940 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1129 09:16:28.888794  302940 out.go:179]   - Using image docker.io/registry:3.0.0
	I1129 09:16:28.888961  302940 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 09:16:28.889003  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1129 09:16:28.889093  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.894750  302940 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1129 09:16:28.894774  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1129 09:16:28.894842  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.907397  302940 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 09:16:28.907417  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1129 09:16:28.907480  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.945194  302940 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-937561"
	I1129 09:16:28.945239  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.945762  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.950027  302940 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1129 09:16:28.951239  302940 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1129 09:16:28.962583  302940 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1129 09:16:28.968065  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.970944  302940 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1129 09:16:28.970964  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1129 09:16:28.971058  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.971513  302940 addons.go:239] Setting addon default-storageclass=true in "addons-937561"
	I1129 09:16:28.971568  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.972006  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.967590  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1129 09:16:28.967620  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:29.006270  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1129 09:16:28.967782  302940 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1129 09:16:29.014378  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1129 09:16:29.014453  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.014235  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1129 09:16:29.036648  302940 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1129 09:16:29.036852  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1129 09:16:29.036876  302940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1129 09:16:29.036955  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.065759  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1129 09:16:29.069810  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1129 09:16:29.072749  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1129 09:16:29.076550  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1129 09:16:29.084001  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.088522  302940 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1129 09:16:29.089644  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:29.091519  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1129 09:16:29.089904  302940 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 09:16:29.091584  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1129 09:16:29.091659  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.089951  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.092658  302940 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 09:16:29.092692  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1129 09:16:29.092744  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.114979  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1129 09:16:29.115001  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1129 09:16:29.115061  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.136437  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.138052  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1129 09:16:29.142387  302940 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 09:16:29.142412  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1129 09:16:29.142478  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.162159  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.165580  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:16:29.178386  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.208571  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.208570  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.209474  302940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:29.209492  302940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:16:29.209558  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.217311  302940 out.go:179]   - Using image docker.io/busybox:stable
	I1129 09:16:29.222247  302940 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1129 09:16:29.230278  302940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 09:16:29.230305  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1129 09:16:29.230372  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.255998  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.274881  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.276256  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.284435  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.294544  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.309382  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.324533  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.325160  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.339306  302940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:29.797362  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:16:29.797442  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1129 09:16:29.845210  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1129 09:16:29.845230  302940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1129 09:16:29.884404  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1129 09:16:29.884427  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1129 09:16:29.915393  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 09:16:29.919884  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:29.920193  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 09:16:29.923306  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1129 09:16:29.923328  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1129 09:16:29.927365  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 09:16:29.931563  302940 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1129 09:16:29.931582  302940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1129 09:16:29.937898  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 09:16:29.942610  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1129 09:16:29.945063  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:29.948410  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:16:29.948485  302940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:16:30.007495  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1129 09:16:30.007577  302940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1129 09:16:30.039541  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 09:16:30.064648  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1129 09:16:30.074680  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1129 09:16:30.074763  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1129 09:16:30.145571  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:16:30.145654  302940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:16:30.148443  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 09:16:30.160520  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1129 09:16:30.160608  302940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1129 09:16:30.163744  302940 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1129 09:16:30.163819  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1129 09:16:30.167411  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1129 09:16:30.167485  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1129 09:16:30.272967  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1129 09:16:30.273045  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1129 09:16:30.351584  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1129 09:16:30.351654  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1129 09:16:30.363833  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1129 09:16:30.375601  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1129 09:16:30.375682  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1129 09:16:30.388383  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:16:30.468308  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1129 09:16:30.468387  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1129 09:16:30.537016  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1129 09:16:30.542395  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1129 09:16:30.542472  302940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1129 09:16:30.649385  302940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.483769722s)
	I1129 09:16:30.649481  302940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.310150726s)
	I1129 09:16:30.650247  302940 node_ready.go:35] waiting up to 6m0s for node "addons-937561" to be "Ready" ...
	I1129 09:16:30.650446  302940 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1129 09:16:30.660657  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1129 09:16:30.660739  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1129 09:16:30.773083  302940 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:30.773156  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1129 09:16:31.102970  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:31.113906  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1129 09:16:31.113928  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1129 09:16:31.158437  302940 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-937561" context rescaled to 1 replicas
	I1129 09:16:31.286283  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1129 09:16:31.286355  302940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1129 09:16:31.452516  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1129 09:16:31.452589  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1129 09:16:31.719853  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1129 09:16:31.719925  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1129 09:16:31.880059  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 09:16:31.880139  302940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1129 09:16:32.088806  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1129 09:16:32.661042  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:34.653192  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.73771722s)
	I1129 09:16:34.653225  302940 addons.go:495] Verifying addon ingress=true in "addons-937561"
	I1129 09:16:34.653429  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.73318372s)
	I1129 09:16:34.653471  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.73352089s)
	I1129 09:16:34.653660  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.726275789s)
	I1129 09:16:34.653708  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.715793567s)
	I1129 09:16:34.653771  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.711094002s)
	I1129 09:16:34.653817  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.708692871s)
	I1129 09:16:34.653852  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.614233282s)
	I1129 09:16:34.653879  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.589154832s)
	I1129 09:16:34.653923  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.505411052s)
	I1129 09:16:34.654041  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.290135541s)
	I1129 09:16:34.654059  302940 addons.go:495] Verifying addon registry=true in "addons-937561"
	I1129 09:16:34.654404  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.26593204s)
	I1129 09:16:34.654428  302940 addons.go:495] Verifying addon metrics-server=true in "addons-937561"
	I1129 09:16:34.654469  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117379235s)
	I1129 09:16:34.654721  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.551721969s)
	W1129 09:16:34.655824  302940 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 09:16:34.655854  302940 retry.go:31] will retry after 147.983939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
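The failure above is the usual CRD ordering race: the VolumeSnapshotClass manifest is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so the first apply fails with "no matches for kind" until the CRDs are established, and the installer simply waits briefly and retries (later re-applying with --force). Below is a minimal Go sketch of that retry pattern; the file path, retry budget, and helper name are illustrative assumptions, not minikube's actual addons.go code.

```go
// Illustrative sketch of retrying "kubectl apply" when CRD-backed kinds are
// not yet established. Not minikube's real helper.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		// Typical first-pass failure: `no matches for kind "VolumeSnapshotClass"`.
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	// Hypothetical invocation mirroring one of the manifests in the log above.
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		3, 150*time.Millisecond)
	if err != nil {
		fmt.Println(err)
	}
}
```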
	I1129 09:16:34.656714  302940 out.go:179] * Verifying ingress addon...
	I1129 09:16:34.656754  302940 out.go:179] * Verifying registry addon...
	I1129 09:16:34.659029  302940 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-937561 service yakd-dashboard -n yakd-dashboard
	
	I1129 09:16:34.661547  302940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1129 09:16:34.662216  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1129 09:16:34.663115  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:34.688336  302940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 09:16:34.688358  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:34.688450  302940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 09:16:34.688472  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1129 09:16:34.690861  302940 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
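The default-storageclass warning above is an optimistic-concurrency conflict: two writers updated the local-path StorageClass from the same resourceVersion, so the update was rejected with "the object has been modified". The conventional fix is to re-read the object and reapply the change under client-go's conflict-retry helper. The sketch below shows that pattern under the assumption that a kubeconfig is available at the default path; it is not minikube's own storageclass code.

```go
// Hedged sketch: toggle the default-class annotation on a StorageClass,
// retrying on 409 Conflict via client-go's retry helper.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func markDefault(cs *kubernetes.Clientset, name string, isDefault bool) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries a fresh resourceVersion.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = fmt.Sprintf("%t", isDefault)
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Demote the conflicting class, as the addon does for "local-path".
	fmt.Println(markDefault(cs, "local-path", false))
}
```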
	I1129 09:16:34.804549  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:35.029840  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.940982209s)
	I1129 09:16:35.029931  302940 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-937561"
	I1129 09:16:35.033106  302940 out.go:179] * Verifying csi-hostpath-driver addon...
	I1129 09:16:35.036956  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1129 09:16:35.047733  302940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 09:16:35.047806  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:35.178288  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:35.179089  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
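The long run of kapi.go:96 lines that follows is the addon verifier polling each label selector until its pods leave Pending. A rough client-go equivalent of that wait loop is sketched below; the selector and namespace come from the log, while the kubeconfig path, poll interval, and function name are assumptions for illustration.

```go
// Illustrative wait loop: poll pods matching a label selector until all
// report phase Running, or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods with selector %q in %q not Running within %s", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Selector and namespace as seen in the log above.
	fmt.Println(waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
}
```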
	I1129 09:16:35.541040  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:35.666747  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:35.667991  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.043117  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:36.166931  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.167299  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:36.540898  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:36.578175  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1129 09:16:36.578284  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:36.594754  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:36.666755  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:36.666822  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.707380  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1129 09:16:36.720694  302940 addons.go:239] Setting addon gcp-auth=true in "addons-937561"
	I1129 09:16:36.720745  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:36.721210  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:36.739615  302940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1129 09:16:36.739691  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:36.756529  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:37.040779  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1129 09:16:37.153684  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:37.166126  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:37.166502  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:37.535652  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.730993793s)
	I1129 09:16:37.538573  302940 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1129 09:16:37.541111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:37.544044  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:37.546996  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1129 09:16:37.547018  302940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1129 09:16:37.562933  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1129 09:16:37.563000  302940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1129 09:16:37.578571  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 09:16:37.578598  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1129 09:16:37.591437  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 09:16:37.668398  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:37.668999  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.052330  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:38.083616  302940 addons.go:495] Verifying addon gcp-auth=true in "addons-937561"
	I1129 09:16:38.087256  302940 out.go:179] * Verifying gcp-auth addon...
	I1129 09:16:38.091913  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1129 09:16:38.097222  302940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1129 09:16:38.097249  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:38.167178  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:38.167497  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.540914  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:38.594757  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:38.666327  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.666450  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.040760  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:39.095559  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:39.167038  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.167248  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:39.540166  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:39.595077  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:39.653639  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:39.665544  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.665949  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:40.047336  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:40.095780  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:40.167146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:40.169620  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:40.540916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:40.595508  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:40.665468  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:40.665513  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.040436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:41.095451  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:41.165823  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:41.166096  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.541185  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:41.595054  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:41.654104  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:41.666546  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.666764  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.040231  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:42.095872  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:42.167695  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:42.168317  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.540347  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:42.595889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:42.665839  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.665978  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.039866  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:43.095548  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:43.166428  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:43.166607  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.541008  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:43.596014  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:43.665876  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.665940  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:44.039798  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:44.095547  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:44.153249  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:44.166569  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:44.166724  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:44.539876  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:44.595401  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:44.666000  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:44.666099  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.043942  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:45.099413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:45.169733  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:45.181094  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.542168  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:45.595089  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:45.665415  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.666003  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.040449  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:46.095411  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:46.166484  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:46.166603  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.540931  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:46.595678  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:46.653472  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:46.666153  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.666318  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:47.040660  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:47.095290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:47.165316  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:47.165900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:47.540351  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:47.594976  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:47.665898  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:47.666494  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:48.040852  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:48.095545  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:48.165851  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:48.165900  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:48.539944  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:48.594851  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:48.653756  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:48.665801  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:48.665917  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:49.039771  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:49.095428  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:49.165766  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:49.166681  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:49.541027  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:49.594833  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:49.665596  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:49.665692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:50.040296  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:50.095441  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:50.166802  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:50.167308  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:50.540359  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:50.595220  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:50.653855  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:50.665936  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:50.666001  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.040170  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:51.095173  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:51.166762  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:51.166889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.540436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:51.595310  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:51.665963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.666367  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.040929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:52.095086  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:52.166105  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.166690  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:52.539832  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:52.595997  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:52.654110  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:52.666453  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.666574  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.041662  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:53.095957  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:53.166242  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.166539  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:53.541045  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:53.594863  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:53.666111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.666429  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:54.040576  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:54.095734  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:54.167033  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:54.167146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:54.540423  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:54.595346  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:54.666023  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:54.666399  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.040688  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:55.095614  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:55.153541  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:55.166530  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:55.166654  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.541005  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:55.595680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:55.665735  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.665808  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.040650  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:56.094870  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:56.165929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.166679  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:56.540646  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:56.595573  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:56.665593  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.665811  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:57.039884  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:57.094755  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:57.153811  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:57.165990  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:57.166175  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:57.540998  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:57.596558  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:57.666345  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:57.666505  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:58.041020  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:58.095692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:58.166514  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:58.167149  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:58.540457  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:58.595874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:58.665656  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:58.666376  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:59.040940  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:59.095530  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:59.166884  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:59.167624  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:59.539798  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:59.596133  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:59.653647  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:59.665750  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:59.665994  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:00.051067  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:00.098382  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:00.184333  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:00.192703  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:00.540524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:00.596286  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:00.666188  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:00.666555  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.040963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:01.095248  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:01.166900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:01.167339  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.541205  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:01.595402  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:01.654011  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:01.666387  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.666517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.041041  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:02.095223  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:02.166689  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:02.166761  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.539901  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:02.595142  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:02.666060  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.666289  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.040637  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:03.095613  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:03.166380  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.166420  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:03.540993  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:03.595224  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:03.666019  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.666111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.040467  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:04.095808  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:04.153973  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:04.166138  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.166472  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:04.540932  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:04.595253  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:04.665727  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.666164  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:05.040163  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:05.095449  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:05.165849  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:05.166325  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:05.540716  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:05.595777  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:05.666457  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:05.666544  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:06.040905  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:06.094874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:06.165715  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:06.166195  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:06.540575  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:06.595524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:06.653361  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:06.665934  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:06.665970  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:07.041044  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:07.095192  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:07.165478  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:07.165746  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:07.540063  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:07.595107  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:07.665729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:07.666155  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:08.040509  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:08.095686  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:08.167096  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:08.167279  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:08.540341  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:08.595331  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:08.666125  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:08.666433  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.040672  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:09.095431  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:09.153357  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:09.165838  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.165900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:09.551138  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:09.666499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:09.683956  302940 node_ready.go:49] node "addons-937561" is "Ready"
	I1129 09:17:09.683983  302940 node_ready.go:38] duration metric: took 39.033719015s for node "addons-937561" to be "Ready" ...
	I1129 09:17:09.683996  302940 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:09.684051  302940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:09.708433  302940 api_server.go:72] duration metric: took 41.126536472s to wait for apiserver process to appear ...
	I1129 09:17:09.708454  302940 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:09.708473  302940 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1129 09:17:09.710355  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.710915  302940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 09:17:09.710954  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:09.718827  302940 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1129 09:17:09.722626  302940 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:09.722704  302940 api_server.go:131] duration metric: took 14.243569ms to wait for apiserver health ...
	I1129 09:17:09.722729  302940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:09.736534  302940 system_pods.go:59] 19 kube-system pods found
	I1129 09:17:09.736617  302940 system_pods.go:61] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending
	I1129 09:17:09.736638  302940 system_pods.go:61] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending
	I1129 09:17:09.736659  302940 system_pods.go:61] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending
	I1129 09:17:09.736699  302940 system_pods.go:61] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:09.736717  302940 system_pods.go:61] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:09.736736  302940 system_pods.go:61] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:09.736769  302940 system_pods.go:61] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:09.736793  302940 system_pods.go:61] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:09.736813  302940 system_pods.go:61] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending
	I1129 09:17:09.736846  302940 system_pods.go:61] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:09.736871  302940 system_pods.go:61] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:09.736889  302940 system_pods.go:61] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:09.736909  302940 system_pods.go:61] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:09.736940  302940 system_pods.go:61] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:09.736966  302940 system_pods.go:61] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending
	I1129 09:17:09.736984  302940 system_pods.go:61] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:09.737016  302940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:09.737044  302940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:09.737065  302940 system_pods.go:61] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:09.737102  302940 system_pods.go:74] duration metric: took 14.353224ms to wait for pod list to return data ...
	I1129 09:17:09.737129  302940 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:09.741595  302940 default_sa.go:45] found service account: "default"
	I1129 09:17:09.741669  302940 default_sa.go:55] duration metric: took 4.520643ms for default service account to be created ...
	I1129 09:17:09.741693  302940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:09.754312  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:09.754397  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending
	I1129 09:17:09.754422  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:09.754460  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending
	I1129 09:17:09.754484  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:09.754501  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:09.754521  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:09.754555  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:09.754581  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:09.754601  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending
	I1129 09:17:09.754635  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:09.754661  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:09.754681  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:09.754716  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:09.754743  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:09.754781  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending
	I1129 09:17:09.754803  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:09.754821  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:09.754843  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:09.754879  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:09.754907  302940 retry.go:31] will retry after 310.106848ms: missing components: kube-dns
	I1129 09:17:10.042294  302940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 09:17:10.042422  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:10.075620  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.075707  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.075733  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.075772  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.075798  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:10.075819  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.075858  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.075883  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.075904  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.075948  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.075977  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.076000  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.076035  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:10.076059  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:10.076080  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:10.076120  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.076149  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:10.076174  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:10.076224  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.076244  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:10.076293  302940 retry.go:31] will retry after 374.335809ms: missing components: kube-dns
	I1129 09:17:10.097499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:10.168019  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:10.168284  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:10.458232  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.458321  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.458344  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.458385  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.458411  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:10.458429  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.458466  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.458489  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.458509  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.458547  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.458570  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.458589  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.458626  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:10.458653  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:10.458675  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:10.458711  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.458737  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:10.458762  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.458797  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.458824  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:10.458869  302940 retry.go:31] will retry after 364.995744ms: missing components: kube-dns
	I1129 09:17:10.541019  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:10.595403  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:10.666287  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:10.667253  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:10.828829  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.828929  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.828962  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.828987  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.829023  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:10.829051  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.829075  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.829108  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.829136  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.829163  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.829195  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.829224  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.829250  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:10.829291  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:10.829314  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:10.829343  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.829381  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:10.829403  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.829428  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.829460  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:10.829501  302940 retry.go:31] will retry after 487.701256ms: missing components: kube-dns
	I1129 09:17:11.041264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:11.096419  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:11.168437  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:11.168873  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:11.324780  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:11.324876  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:11.324906  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:11.324928  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:11.324968  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:11.324990  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:11.325013  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:11.325046  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:11.325066  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:11.325094  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:11.325126  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:11.325156  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:11.325187  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:11.325218  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:11.325249  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:11.325288  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:11.325312  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:11.325359  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.325382  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.325404  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:11.325450  302940 retry.go:31] will retry after 624.811464ms: missing components: kube-dns
	I1129 09:17:11.540517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:11.595583  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:11.666558  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:11.667665  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:11.955068  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:11.955107  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:11.955117  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:11.955125  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:11.955132  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:11.955136  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:11.955141  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:11.955145  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:11.955156  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:11.955163  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:11.955170  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:11.955175  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:11.955181  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:11.955188  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:11.955199  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:11.955205  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:11.955212  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:11.955221  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.955229  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.955233  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:11.955248  302940 retry.go:31] will retry after 628.756685ms: missing components: kube-dns
	I1129 09:17:12.040654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:12.097654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:12.197354  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:12.197499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:12.540912  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:12.589178  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:12.589219  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:12.589231  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:12.589249  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:12.589260  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:12.589265  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:12.589278  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:12.589283  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:12.589288  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:12.589300  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:12.589304  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:12.589309  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:12.589323  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:12.589336  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:12.589342  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:12.589348  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:12.589355  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:12.589367  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:12.589375  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:12.589379  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:12.589406  302940 retry.go:31] will retry after 753.534635ms: missing components: kube-dns
	I1129 09:17:12.595055  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:12.666667  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:12.666765  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.040572  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:13.095146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:13.167327  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:13.167438  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.347795  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:13.347831  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Running
	I1129 09:17:13.347843  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:13.347851  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:13.347860  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:13.347864  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:13.347869  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:13.347881  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:13.347885  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:13.347895  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:13.347899  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:13.347906  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:13.347914  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:13.347924  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:13.347931  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:13.347937  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:13.347945  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:13.347951  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:13.347958  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:13.347963  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:13.347973  302940 system_pods.go:126] duration metric: took 3.606260532s to wait for k8s-apps to be running ...
	I1129 09:17:13.347984  302940 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.348041  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.363923  302940 system_svc.go:56] duration metric: took 15.919489ms WaitForService to wait for kubelet
	I1129 09:17:13.363992  302940 kubeadm.go:587] duration metric: took 44.782100402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.364049  302940 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.367221  302940 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:17:13.367294  302940 node_conditions.go:123] node cpu capacity is 2
	I1129 09:17:13.367322  302940 node_conditions.go:105] duration metric: took 3.25505ms to run NodePressure ...
	I1129 09:17:13.367347  302940 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.541247  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:13.595559  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:13.667044  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.667833  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.040826  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:14.095794  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:14.175051  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:14.175583  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.541409  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:14.595577  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:14.667201  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.667500  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:15.041894  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:15.095242  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:15.167145  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:15.167578  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:15.541874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:15.595388  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:15.666091  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:15.667571  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.041680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:16.094929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:16.166833  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:16.167747  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.541930  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:16.596013  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:16.667708  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.668094  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:17.040743  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:17.095989  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:17.166178  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:17.167644  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:17.540613  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:17.595520  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:17.665734  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:17.665918  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.041754  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:18.095791  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:18.166884  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.167263  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:18.541536  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:18.595547  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:18.670136  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.670328  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:19.043220  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:19.143752  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:19.167117  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:19.167269  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:19.540713  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:19.595704  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:19.665881  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:19.666237  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.040971  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:20.095083  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:20.166239  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:20.166925  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.540846  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:20.595177  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:20.666950  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.667251  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:21.041805  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:21.095460  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:21.166520  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:21.167883  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:21.542444  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:21.595479  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:21.667264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:21.667611  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.041497  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:22.095684  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:22.168314  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.168395  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:22.541916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:22.595413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:22.667540  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.667815  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:23.040983  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:23.095052  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:23.167770  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:23.167962  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:23.541718  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:23.596064  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:23.667174  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:23.667753  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:24.042043  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:24.095436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:24.167434  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:24.167669  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:24.540746  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:24.595871  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:24.666127  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:24.666210  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:25.041186  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:25.095369  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:25.166927  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:25.167413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:25.542889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:25.595225  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:25.668854  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:25.668423  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.040706  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:26.095022  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:26.167532  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.167803  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:26.541422  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:26.595502  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:26.666409  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.666617  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:27.041692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:27.095773  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:27.167623  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:27.168047  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:27.540791  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:27.596156  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:27.668471  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:27.668930  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:28.040785  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:28.096284  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:28.167459  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:28.167998  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:28.540670  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:28.595249  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:28.665213  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:28.665774  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:29.041991  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:29.095290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:29.166709  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:29.166956  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:29.540699  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:29.595745  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:29.666320  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:29.666793  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.049373  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:30.103192  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:30.166134  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:30.166288  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.540548  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:30.595489  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:30.667152  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.667290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.041525  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:31.096205  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:31.167212  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:31.167622  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.541386  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:31.596186  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:31.666290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.667058  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.040893  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:32.095151  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:32.165992  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:32.166824  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.541789  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:32.595662  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:32.666732  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.667846  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:33.040817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:33.095635  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:33.166653  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:33.167400  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:33.541397  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:33.595230  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:33.667062  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:33.667410  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:34.042729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:34.095830  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:34.166875  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:34.167152  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:34.540508  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:34.595517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:34.666885  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:34.667150  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:35.040647  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:35.095821  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:35.166951  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:35.167472  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:35.541511  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:35.595505  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:35.666243  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:35.666357  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.040680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:36.094698  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:36.166805  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.167528  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:36.544218  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:36.595383  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:36.666033  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.666762  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:37.040768  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:37.095879  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:37.167367  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:37.167544  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:37.541006  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:37.595268  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:37.666109  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:37.666379  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:38.041305  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:38.095813  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:38.168305  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:38.169025  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:38.541729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:38.641565  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:38.670256  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:38.670713  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:39.041915  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:39.095320  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:39.166835  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:39.167351  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:39.548587  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:39.596131  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:39.666817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:39.667463  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:40.041818  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:40.095229  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:40.171512  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:40.171724  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:40.541209  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:40.642183  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:40.665476  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:40.665696  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:41.045730  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:41.095609  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:41.167482  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:41.167819  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:41.541239  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:41.616012  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:41.668646  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:41.668868  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:42.041329  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:42.150227  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:42.166812  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:42.167070  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:42.541524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:42.641038  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:42.667356  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:42.667523  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:43.041458  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:43.095246  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:43.166232  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:43.166289  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:43.541526  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:43.595919  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:43.668040  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:43.668494  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:44.042504  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:44.095848  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:44.168606  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:44.178134  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:44.540995  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:44.595853  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:44.667635  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:44.668039  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:45.065709  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:45.096961  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:45.168963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:45.170529  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:45.541241  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:45.595654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:45.666881  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:45.667072  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:46.040959  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:46.095944  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:46.168503  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:46.169244  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:46.541321  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:46.595208  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:46.668580  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:46.669386  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:47.042063  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:47.095817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:47.167354  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:47.167832  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:47.540311  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:47.595243  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:47.665717  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:47.666210  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:48.040731  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:48.095620  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:48.165957  302940 kapi.go:107] duration metric: took 1m13.503738263s to wait for kubernetes.io/minikube-addons=registry ...
	I1129 09:17:48.166407  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:48.541600  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:48.595916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:48.668167  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:49.040867  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:49.094903  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:49.176054  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:49.540955  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:49.595369  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:49.673279  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:50.041265  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:50.095877  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:50.165824  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:50.540941  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:50.594698  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:50.665785  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:51.040348  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:51.095279  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:51.165373  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:51.541754  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:51.596068  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:51.666328  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:52.051370  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:52.151355  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:52.165386  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:52.540452  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:52.595033  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:52.666228  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:53.041363  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:53.097032  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:53.168158  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:53.542332  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:53.641943  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:53.667262  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:54.041029  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:54.095446  302940 kapi.go:107] duration metric: took 1m16.003518182s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1129 09:17:54.098799  302940 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-937561 cluster.
	I1129 09:17:54.101879  302940 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1129 09:17:54.104912  302940 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1129 09:17:54.165722  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:54.541264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:54.666120  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:55.041467  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:55.165441  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:55.540811  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:55.665732  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:56.040343  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:56.166764  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:56.540740  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:56.665962  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:57.041398  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:57.165704  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:57.540963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:57.665799  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:58.041071  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:58.166895  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:58.539916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:58.665765  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:59.040921  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:59.178304  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:59.547504  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:59.667333  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:00.052586  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:00.235670  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:00.543490  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:00.665523  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:01.041494  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:01.165913  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:01.540778  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:01.667541  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:02.041177  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:02.167907  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:02.540722  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:02.665638  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:03.040403  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:03.165941  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:03.551549  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:03.665951  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:04.040210  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:04.166262  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:04.541019  302940 kapi.go:107] duration metric: took 1m29.504062995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1129 09:18:04.666158  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:05.166332  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:05.666433  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:06.165970  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:06.666436  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:07.165936  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:07.665168  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:08.167469  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:08.666373  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:09.166182  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:09.665904  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:10.165399  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:10.666273  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:11.166170  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:11.665583  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:12.166040  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:12.665485  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:13.166784  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:13.665777  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:14.166641  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:14.666556  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:15.168142  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:15.666524  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:16.166306  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:16.665567  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:17.166671  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:17.666252  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:18.165859  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:18.666415  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:19.166782  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:19.668657  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:20.166319  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:20.665831  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:21.168818  302940 kapi.go:107] duration metric: took 1m46.507273702s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1129 09:18:21.171943  302940 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1129 09:18:21.174959  302940 addons.go:530] duration metric: took 1m52.592645975s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns inspektor-gadget storage-provisioner registry-creds cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1129 09:18:21.175021  302940 start.go:247] waiting for cluster config update ...
	I1129 09:18:21.175046  302940 start.go:256] writing updated cluster config ...
	I1129 09:18:21.175334  302940 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:21.180291  302940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:18:21.184418  302940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dwkbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.189696  302940 pod_ready.go:94] pod "coredns-66bc5c9577-dwkbv" is "Ready"
	I1129 09:18:21.189731  302940 pod_ready.go:86] duration metric: took 5.280941ms for pod "coredns-66bc5c9577-dwkbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.196418  302940 pod_ready.go:83] waiting for pod "etcd-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.204765  302940 pod_ready.go:94] pod "etcd-addons-937561" is "Ready"
	I1129 09:18:21.204792  302940 pod_ready.go:86] duration metric: took 8.347074ms for pod "etcd-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.269630  302940 pod_ready.go:83] waiting for pod "kube-apiserver-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.274291  302940 pod_ready.go:94] pod "kube-apiserver-addons-937561" is "Ready"
	I1129 09:18:21.274321  302940 pod_ready.go:86] duration metric: took 4.664669ms for pod "kube-apiserver-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.276613  302940 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.584445  302940 pod_ready.go:94] pod "kube-controller-manager-addons-937561" is "Ready"
	I1129 09:18:21.584477  302940 pod_ready.go:86] duration metric: took 307.839749ms for pod "kube-controller-manager-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.785225  302940 pod_ready.go:83] waiting for pod "kube-proxy-79sbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.184722  302940 pod_ready.go:94] pod "kube-proxy-79sbl" is "Ready"
	I1129 09:18:22.184752  302940 pod_ready.go:86] duration metric: took 399.497579ms for pod "kube-proxy-79sbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.396095  302940 pod_ready.go:83] waiting for pod "kube-scheduler-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.784250  302940 pod_ready.go:94] pod "kube-scheduler-addons-937561" is "Ready"
	I1129 09:18:22.784322  302940 pod_ready.go:86] duration metric: took 388.195347ms for pod "kube-scheduler-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.784342  302940 pod_ready.go:40] duration metric: took 1.604006225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:18:22.844151  302940 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:18:22.848957  302940 out.go:179] * Done! kubectl is now configured to use "addons-937561" cluster and "default" namespace by default
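
The kapi.go:96 lines above are a simple poll loop: list pods matching a label selector, retry while none of them report the Ready condition, then log the elapsed time (kapi.go:107). A minimal client-go sketch of that pattern, assuming a default kubeconfig and an illustrative namespace, selector, and timeout (this is not minikube's actual implementation):

// Poll pods by label selector until every match reports Ready.
// Illustrative only; kubeconfig path, namespace, selector and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	namespace, selector := "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"
	start := time.Now()

	// Poll every 500ms for up to 6 minutes, mirroring the repeated
	// "waiting for pod ... current state: Pending" lines in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists just mean "keep waiting"
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
}
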
	
	
	==> CRI-O <==
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.624994695Z" level=info msg="Removed container 473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca: kube-system/registry-creds-764b6fb674-8q8xm/registry-creds" id=9394448b-ef53-47b8-88cc-41c1f327de55 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.834506591Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-64z4j/POD" id=eae400ae-3271-411e-868a-d9a4384c3224 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.834578009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.858725451Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-64z4j Namespace:default ID:00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270 UID:1bd65aff-9433-4ece-b991-660b39b767ce NetNS:/var/run/netns/305460f4-5f88-4b2a-8929-33fd2c0b2f8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400239a270}] Aliases:map[]}"
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.862152936Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-64z4j to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.884912183Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-64z4j Namespace:default ID:00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270 UID:1bd65aff-9433-4ece-b991-660b39b767ce NetNS:/var/run/netns/305460f4-5f88-4b2a-8929-33fd2c0b2f8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400239a270}] Aliases:map[]}"
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.887572199Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-64z4j for CNI network kindnet (type=ptp)"
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.89884542Z" level=info msg="Ran pod sandbox 00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270 with infra container: default/hello-world-app-5d498dc89-64z4j/POD" id=eae400ae-3271-411e-868a-d9a4384c3224 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.900457832Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=12ad4d94-ee1f-42b3-958f-88b546c98e7a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.900723214Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=12ad4d94-ee1f-42b3-958f-88b546c98e7a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.900832689Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=12ad4d94-ee1f-42b3-958f-88b546c98e7a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.901851895Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e6ff7e51-3ed9-465d-800d-7f2e72f7e353 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:21:22 addons-937561 crio[828]: time="2025-11-29T09:21:22.908678204Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.631762574Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=e6ff7e51-3ed9-465d-800d-7f2e72f7e353 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.632431645Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=387bc5e1-4fff-41f5-82cf-f8eda03bd8e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.635433681Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bd6bb0fa-a132-47fc-a2d6-f345708c71c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.642303174Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-64z4j/hello-world-app" id=b223d062-f2e8-454f-8178-e01c4cebee81 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.642412172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.654730832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.655147962Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0f2a92d17882404996e3396a4b1d7e14eba5441ff76204e1648fcf63c4a1ed15/merged/etc/passwd: no such file or directory"
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.655277129Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0f2a92d17882404996e3396a4b1d7e14eba5441ff76204e1648fcf63c4a1ed15/merged/etc/group: no such file or directory"
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.657123924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.6853448Z" level=info msg="Created container 75ab3352956ccd8ed51474bbb1c382de3079d319dc1fdfe283746895c24e3b67: default/hello-world-app-5d498dc89-64z4j/hello-world-app" id=b223d062-f2e8-454f-8178-e01c4cebee81 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.686342549Z" level=info msg="Starting container: 75ab3352956ccd8ed51474bbb1c382de3079d319dc1fdfe283746895c24e3b67" id=4390c53a-ea90-4d40-a521-35b686d16fef name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:21:23 addons-937561 crio[828]: time="2025-11-29T09:21:23.691986803Z" level=info msg="Started container" PID=7163 containerID=75ab3352956ccd8ed51474bbb1c382de3079d319dc1fdfe283746895c24e3b67 description=default/hello-world-app-5d498dc89-64z4j/hello-world-app id=4390c53a-ea90-4d40-a521-35b686d16fef name=/runtime.v1.RuntimeService/StartContainer sandboxID=00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270
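
The CRI-O entries above are the server side of CRI gRPC calls (/runtime.v1.RuntimeService/RunPodSandbox, CreateContainer, StartContainer, and the ImageService pull). A minimal sketch of a client for the same RuntimeService, listing containers much like the "container status" table below; the socket path is CRI-O's default and an assumption, and it would have to run on the node itself (e.g. via `minikube ssh`), where `sudo crictl ps -a` reports the same data:

// List containers over the CRI RuntimeService gRPC API (sketch, not minikube code).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path; adjust if the runtime is configured differently.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print container ID, name, and state (RUNNING, EXITED, ...).
		fmt.Printf("%s  %-40s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
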
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	75ab3352956cc       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   00a4fcec55c6c       hello-world-app-5d498dc89-64z4j            default
	1499838d046cc       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           1                   ec8801e95a767       registry-creds-764b6fb674-8q8xm            kube-system
	f18f8ac3372f1       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   fb7a88e0ef361       nginx                                      default
	6f60b262aa11c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   3af909eca0065       busybox                                    default
	1e4aa857cf0de       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   026780566ef70       ingress-nginx-controller-6c8bf45fb-8gjmc   ingress-nginx
	aef9a73b52cb1       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   30c18f3e3a89b       ingress-nginx-admission-patch-t6l5q        ingress-nginx
	3c1b8b66c425e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	9aef6b7b60e4c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	f225ca290de28       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	30eb3a8c8cd59       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	12a5a97ec92c6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   67b94c2a133ba       gadget-dz9wn                               gadget
	e6e40e77afa28       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	d49a00843822c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   79a1b326999a7       gcp-auth-78565c9fb4-bz6gr                  gcp-auth
	cfb15c1680321       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   4ed3d5046030e       ingress-nginx-admission-create-cnhs2       ingress-nginx
	506dbad310eb8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   af45e2a4f9373       local-path-provisioner-648f6765c9-v587p    local-path-storage
	11d43a48abd4b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   856211fae8341       registry-proxy-5t68c                       kube-system
	ffd3ddcf27f55       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   606b0d42d4242       metrics-server-85b7d694d7-jfpt2            kube-system
	af2e25ba59276       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   a9d5dfc1d1ced       csi-hostpath-resizer-0                     kube-system
	c8fe1df2373bb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   ce30468d1e40f       snapshot-controller-7d9fbc56b8-9qqmq       kube-system
	b9fd6b139f9a6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   d5a70d37926f5       registry-6b586f9694-9wb6d                  kube-system
	66fc5abcc6517       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   f0ed73780ecb5       kube-ingress-dns-minikube                  kube-system
	f8cb526e085ff       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	0c0ef85d8b377       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   1dbb61d0109c7       nvidia-device-plugin-daemonset-2kd5l       kube-system
	fdb66785f2ceb       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   5ffb1adfc5fe5       cloud-spanner-emulator-5bdddb765-42lcn     default
	5bc214d6f747a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   bed435b65f844       snapshot-controller-7d9fbc56b8-nbcng       kube-system
	48a8f333ea4b4       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   f4c29b965f050       yakd-dashboard-5ff678cb9-n2vjf             yakd-dashboard
	6159812cd62ca       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   17f6b766c5f7f       csi-hostpath-attacher-0                    kube-system
	cea7127d80def       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   e87c28e4780ae       coredns-66bc5c9577-dwkbv                   kube-system
	d8b057511cccc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   55e74c9f25eba       storage-provisioner                        kube-system
	febc943f90d57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   075160918c208       kindnet-wk9nw                              kube-system
	8f16da7a481b2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   b86de4cf9d6c7       kube-proxy-79sbl                           kube-system
	1f72b846137bb       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   ce1005629d6f4       kube-apiserver-addons-937561               kube-system
	b28d6a65a1d2e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   1fa1b9c0e6c15       etcd-addons-937561                         kube-system
	465f08cb21ea0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   5a1dd785d7c17       kube-scheduler-addons-937561               kube-system
	c0d24f1fa0e94       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   5bea9984245ce       kube-controller-manager-addons-937561      kube-system
	
	
	==> coredns [cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57] <==
	[INFO] 10.244.0.9:44133 - 4 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001674779s
	[INFO] 10.244.0.9:44133 - 1748 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000141631s
	[INFO] 10.244.0.9:44133 - 37158 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00013843s
	[INFO] 10.244.0.9:40001 - 4153 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014675s
	[INFO] 10.244.0.9:40001 - 3931 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096371s
	[INFO] 10.244.0.9:55967 - 34097 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090537s
	[INFO] 10.244.0.9:55967 - 33883 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000207592s
	[INFO] 10.244.0.9:49430 - 22791 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00013048s
	[INFO] 10.244.0.9:49430 - 22619 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164852s
	[INFO] 10.244.0.9:59642 - 29718 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00119835s
	[INFO] 10.244.0.9:59642 - 29512 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001325162s
	[INFO] 10.244.0.9:33703 - 64004 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120034s
	[INFO] 10.244.0.9:33703 - 64408 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150788s
	[INFO] 10.244.0.19:45452 - 57458 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199092s
	[INFO] 10.244.0.19:35099 - 40318 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000091308s
	[INFO] 10.244.0.19:56134 - 2462 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196007s
	[INFO] 10.244.0.19:35396 - 48428 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130627s
	[INFO] 10.244.0.19:52016 - 30471 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000159395s
	[INFO] 10.244.0.19:32899 - 6704 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094688s
	[INFO] 10.244.0.19:40342 - 34745 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002393271s
	[INFO] 10.244.0.19:49767 - 60613 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002387601s
	[INFO] 10.244.0.19:43120 - 40168 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002766585s
	[INFO] 10.244.0.19:41761 - 125 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003199082s
	[INFO] 10.244.0.23:43725 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234834s
	[INFO] 10.244.0.23:39592 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130332s
	
	
	==> describe nodes <==
	Name:               addons-937561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-937561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=addons-937561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-937561
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-937561"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-937561
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:21:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:19:58 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:19:58 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:19:58 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:19:58 +0000   Sat, 29 Nov 2025 09:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-937561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                829b03eb-db97-4d35-b80b-ed10fd5f92a5
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-5bdddb765-42lcn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  default                     hello-world-app-5d498dc89-64z4j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-dz9wn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-bz6gr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-8gjmc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-dwkbv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-w96sq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 etcd-addons-937561                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m3s
	  kube-system                 kindnet-wk9nw                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-937561                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-937561       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-79sbl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-937561                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 metrics-server-85b7d694d7-jfpt2             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-2kd5l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-6b586f9694-9wb6d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-8q8xm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-5t68c                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 snapshot-controller-7d9fbc56b8-9qqmq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-nbcng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-v587p     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-n2vjf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node addons-937561 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node addons-937561 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m9s)  kubelet          Node addons-937561 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s                 kubelet          Node addons-937561 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s                 kubelet          Node addons-937561 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s                 kubelet          Node addons-937561 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s                node-controller  Node addons-937561 event: Registered Node addons-937561 in Controller
	  Normal   NodeReady                4m15s                kubelet          Node addons-937561 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015149] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507546] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034739] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.833095] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +4.564053] kauditd_printk_skb: 35 callbacks suppressed
	[Nov29 08:31] hrtimer: interrupt took 8840027 ns
	[Nov29 09:14] kauditd_printk_skb: 8 callbacks suppressed
	[Nov29 09:16] overlayfs: idmapped layers are currently not supported
	[  +0.067811] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672] <==
	{"level":"warn","ts":"2025-11-29T09:16:18.123699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.146697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.160233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.183947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.199244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.217458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.231864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.247743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.275478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.289527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.326943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.331001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.344396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.370365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.393504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.430723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.445152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.486660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.579567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:35.142832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:35.158630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.323896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.338893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.386366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.401800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52732","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d49a00843822cbbf937c36479a92273ed7e64c8b86c3121dd1744c340c7bdb6e] <==
	2025/11/29 09:17:53 GCP Auth Webhook started!
	2025/11/29 09:18:23 Ready to marshal response ...
	2025/11/29 09:18:23 Ready to write response ...
	2025/11/29 09:18:23 Ready to marshal response ...
	2025/11/29 09:18:23 Ready to write response ...
	2025/11/29 09:18:24 Ready to marshal response ...
	2025/11/29 09:18:24 Ready to write response ...
	2025/11/29 09:18:45 Ready to marshal response ...
	2025/11/29 09:18:45 Ready to write response ...
	2025/11/29 09:18:49 Ready to marshal response ...
	2025/11/29 09:18:49 Ready to write response ...
	2025/11/29 09:18:50 Ready to marshal response ...
	2025/11/29 09:18:50 Ready to write response ...
	2025/11/29 09:18:57 Ready to marshal response ...
	2025/11/29 09:18:57 Ready to write response ...
	2025/11/29 09:19:02 Ready to marshal response ...
	2025/11/29 09:19:02 Ready to write response ...
	2025/11/29 09:19:12 Ready to marshal response ...
	2025/11/29 09:19:12 Ready to write response ...
	2025/11/29 09:19:44 Ready to marshal response ...
	2025/11/29 09:19:44 Ready to write response ...
	2025/11/29 09:21:22 Ready to marshal response ...
	2025/11/29 09:21:22 Ready to write response ...
	
	
	==> kernel <==
	 09:21:24 up  2:03,  0 user,  load average: 0.49, 1.74, 2.74
	Linux addons-937561 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9] <==
	I1129 09:19:19.347291       1 main.go:301] handling current node
	I1129 09:19:29.346940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:19:29.347066       1 main.go:301] handling current node
	I1129 09:19:39.350373       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:19:39.350407       1 main.go:301] handling current node
	I1129 09:19:49.347501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:19:49.347534       1 main.go:301] handling current node
	I1129 09:19:59.346758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:19:59.346790       1 main.go:301] handling current node
	I1129 09:20:09.353567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:09.353604       1 main.go:301] handling current node
	I1129 09:20:19.351045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:19.351079       1 main.go:301] handling current node
	I1129 09:20:29.355818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:29.355931       1 main.go:301] handling current node
	I1129 09:20:39.353345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:39.353464       1 main.go:301] handling current node
	I1129 09:20:49.354651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:49.354698       1 main.go:301] handling current node
	I1129 09:20:59.348656       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:20:59.348922       1 main.go:301] handling current node
	I1129 09:21:09.354207       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:21:09.354242       1 main.go:301] handling current node
	I1129 09:21:19.356537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:21:19.356636       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a] <==
	E1129 09:17:09.630888       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.251.250:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:34.190657       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:34.190753       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1129 09:17:34.190764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1129 09:17:34.193920       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:34.193965       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1129 09:17:34.193978       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1129 09:17:55.987664       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:55.987850       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:55.987905       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1129 09:17:55.990497       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	E1129 09:17:55.993688       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	I1129 09:17:56.142588       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1129 09:18:33.449554       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53788: use of closed network connection
	E1129 09:18:33.584244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53810: use of closed network connection
	I1129 09:19:01.801625       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1129 09:19:02.107634       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.228.65"}
	I1129 09:19:22.661958       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1129 09:21:22.717820       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.233.15"}
	
	
	==> kube-controller-manager [c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7] <==
	I1129 09:16:27.337962       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:16:27.354267       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:16:27.354310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:27.354331       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:16:27.354540       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:16:27.355035       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:16:27.355071       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:16:27.355105       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:16:27.355168       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:16:27.355922       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:16:27.356105       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:16:27.356180       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:16:27.358507       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:16:27.359060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:27.359284       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	E1129 09:16:57.316744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 09:16:57.316906       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1129 09:16:57.316945       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1129 09:16:57.357569       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1129 09:16:57.362767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1129 09:16:57.417468       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:16:57.463174       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:12.319339       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1129 09:17:27.422871       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 09:17:27.471559       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4] <==
	I1129 09:16:29.405538       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:29.493257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:29.597904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:29.598217       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 09:16:29.598289       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:29.655179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:29.655239       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:29.659655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:29.659928       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:29.659942       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:29.661328       1 config.go:200] "Starting service config controller"
	I1129 09:16:29.661338       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:29.661354       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:29.661358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:29.661376       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:29.661380       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:29.662038       1 config.go:309] "Starting node config controller"
	I1129 09:16:29.662045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:29.662051       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:29.761928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:29.761963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:16:29.762002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a] <==
	I1129 09:16:19.543323       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:16:21.300344       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:16:21.300441       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:16:21.300474       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:16:21.300530       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:16:21.326670       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:16:21.326792       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:21.329496       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:16:21.329764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:21.329789       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:21.329807       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:16:21.430137       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:19:52 addons-937561 kubelet[1275]: I1129 09:19:52.558634    1275 reconciler_common.go:299] "Volume detached for volume \"pvc-2d3d17d1-d639-46d5-8a95-0c09bad44373\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8a795e68-cd04-11f0-b67f-d285dfa64f6a\") on node \"addons-937561\" DevicePath \"\""
	Nov 29 09:19:54 addons-937561 kubelet[1275]: I1129 09:19:54.461425    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2847890d-2340-4a02-8c08-2900e5149e57" path="/var/lib/kubelet/pods/2847890d-2340-4a02-8c08-2900e5149e57/volumes"
	Nov 29 09:20:17 addons-937561 kubelet[1275]: I1129 09:20:17.457817    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-9wb6d" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:20:22 addons-937561 kubelet[1275]: E1129 09:20:22.567181    1275 manager.go:1116] Failed to create existing container: /docker/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/crio-e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441: Error finding container e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441: Status 404 returned error can't find the container with id e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441
	Nov 29 09:20:22 addons-937561 kubelet[1275]: E1129 09:20:22.567435    1275 manager.go:1116] Failed to create existing container: /docker/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/crio-c760cbee09085fbc32c5840a8fa2e4bbef7ebf85fca25c220b59da3bf294e8ab: Error finding container c760cbee09085fbc32c5840a8fa2e4bbef7ebf85fca25c220b59da3bf294e8ab: Status 404 returned error can't find the container with id c760cbee09085fbc32c5840a8fa2e4bbef7ebf85fca25c220b59da3bf294e8ab
	Nov 29 09:20:22 addons-937561 kubelet[1275]: E1129 09:20:22.567623    1275 manager.go:1116] Failed to create existing container: /crio-e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441: Error finding container e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441: Status 404 returned error can't find the container with id e17d34ee1e1e08413be7ddf8ab33916cc7f15dcd294d582ed2d67822c8e0f441
	Nov 29 09:20:22 addons-937561 kubelet[1275]: E1129 09:20:22.573079    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cce637cce507a8215d96ded6a69abb308bbfef2be51b80e6bee202f8836586af/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cce637cce507a8215d96ded6a69abb308bbfef2be51b80e6bee202f8836586af/diff: no such file or directory, extraDiskErr: <nil>
	Nov 29 09:20:43 addons-937561 kubelet[1275]: I1129 09:20:43.458389    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5t68c" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:20:48 addons-937561 kubelet[1275]: I1129 09:20:48.458192    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2kd5l" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:21:19 addons-937561 kubelet[1275]: I1129 09:21:19.958714    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8q8xm" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:21:21 addons-937561 kubelet[1275]: I1129 09:21:21.821652    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8q8xm" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:21:21 addons-937561 kubelet[1275]: I1129 09:21:21.822180    1275 scope.go:117] "RemoveContainer" containerID="473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: I1129 09:21:22.569564    1275 scope.go:117] "RemoveContainer" containerID="473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: E1129 09:21:22.643675    1275 manager.go:1116] Failed to create existing container: /docker/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/crio/crio-473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca: Error finding container 473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca: Status 404 returned error can't find the container with id 473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca
	Nov 29 09:21:22 addons-937561 kubelet[1275]: I1129 09:21:22.645152    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1bd65aff-9433-4ece-b991-660b39b767ce-gcp-creds\") pod \"hello-world-app-5d498dc89-64z4j\" (UID: \"1bd65aff-9433-4ece-b991-660b39b767ce\") " pod="default/hello-world-app-5d498dc89-64z4j"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: I1129 09:21:22.645200    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ql4\" (UniqueName: \"kubernetes.io/projected/1bd65aff-9433-4ece-b991-660b39b767ce-kube-api-access-k9ql4\") pod \"hello-world-app-5d498dc89-64z4j\" (UID: \"1bd65aff-9433-4ece-b991-660b39b767ce\") " pod="default/hello-world-app-5d498dc89-64z4j"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: E1129 09:21:22.658385    1275 manager.go:1116] Failed to create existing container: /crio/crio-473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca: Error finding container 473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca: Status 404 returned error can't find the container with id 473a5b038dd9401e08ff374625c21c5641134ab4764d94c3e59b2303fa9e50ca
	Nov 29 09:21:22 addons-937561 kubelet[1275]: I1129 09:21:22.827460    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8q8xm" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: I1129 09:21:22.827953    1275 scope.go:117] "RemoveContainer" containerID="1499838d046cc09b18e58997d96df4911fc101f5b55283ff0952a7cab3eb86a0"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: E1129 09:21:22.828232    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8q8xm_kube-system(4407cb92-93a5-4523-b1da-d85a945d9fb8)\"" pod="kube-system/registry-creds-764b6fb674-8q8xm" podUID="4407cb92-93a5-4523-b1da-d85a945d9fb8"
	Nov 29 09:21:22 addons-937561 kubelet[1275]: W1129 09:21:22.893982    1275 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/crio-00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270 WatchSource:0}: Error finding container 00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270: Status 404 returned error can't find the container with id 00a4fcec55c6c33f9a7fc3a99f46430c47c172778d905d39d8db2688df58e270
	Nov 29 09:21:23 addons-937561 kubelet[1275]: I1129 09:21:23.832699    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8q8xm" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 09:21:23 addons-937561 kubelet[1275]: I1129 09:21:23.832755    1275 scope.go:117] "RemoveContainer" containerID="1499838d046cc09b18e58997d96df4911fc101f5b55283ff0952a7cab3eb86a0"
	Nov 29 09:21:23 addons-937561 kubelet[1275]: E1129 09:21:23.832919    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8q8xm_kube-system(4407cb92-93a5-4523-b1da-d85a945d9fb8)\"" pod="kube-system/registry-creds-764b6fb674-8q8xm" podUID="4407cb92-93a5-4523-b1da-d85a945d9fb8"
	Nov 29 09:21:23 addons-937561 kubelet[1275]: I1129 09:21:23.917701    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-64z4j" podStartSLOduration=1.184701203 podStartE2EDuration="1.91768051s" podCreationTimestamp="2025-11-29 09:21:22 +0000 UTC" firstStartedPulling="2025-11-29 09:21:22.90123902 +0000 UTC m=+300.641193670" lastFinishedPulling="2025-11-29 09:21:23.634218337 +0000 UTC m=+301.374172977" observedRunningTime="2025-11-29 09:21:23.865905824 +0000 UTC m=+301.605860465" watchObservedRunningTime="2025-11-29 09:21:23.91768051 +0000 UTC m=+301.657635167"
	
	
	==> storage-provisioner [d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c] <==
	W1129 09:20:59.606790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:01.610298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:01.614920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:03.617933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:03.624865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:05.627635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:05.632246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:07.635229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:07.639979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:09.642944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:09.649416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:11.653373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:11.660433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:13.663994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:13.670815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:15.673630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:15.678494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:17.681418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:17.686026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:19.688701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:19.693014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:21.696933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:21.701604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:23.705074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:21:23.710813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
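The dump above is minikube's standard post-mortem log capture for the failed Ingress test. A minimal sketch of how to regenerate a comparable dump against the same cluster, assuming the addons-937561 profile and the out/minikube-linux-arm64 binary from this run are still present on the host:

	# capture the same container status / coredns / kubelet / etc. sections to a file
	out/minikube-linux-arm64 -p addons-937561 logs --file=addons-937561-postmortem.log

The --file flag writes the capture to disk instead of stdout; omit it to print to the terminal the way the test harness captures it here.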
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-937561 -n addons-937561
helpers_test.go:269: (dbg) Run:  kubectl --context addons-937561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q: exit status 1 (121.082865ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cnhs2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t6l5q" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (293.056916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:21:25.996983  312622 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:21:25.997710  312622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:25.997748  312622 out.go:374] Setting ErrFile to fd 2...
	I1129 09:21:25.997773  312622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:25.998058  312622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:21:25.998441  312622 mustload.go:66] Loading cluster: addons-937561
	I1129 09:21:25.998871  312622 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:25.998916  312622 addons.go:622] checking whether the cluster is paused
	I1129 09:21:25.999051  312622 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:25.999086  312622 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:21:25.999722  312622 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:21:26.028291  312622 ssh_runner.go:195] Run: systemctl --version
	I1129 09:21:26.028354  312622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:21:26.050822  312622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:21:26.156878  312622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:21:26.156978  312622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:26.192221  312622 cri.go:89] found id: "1499838d046cc09b18e58997d96df4911fc101f5b55283ff0952a7cab3eb86a0"
	I1129 09:21:26.192244  312622 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:21:26.192249  312622 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:21:26.192252  312622 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:21:26.192256  312622 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:21:26.192259  312622 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:21:26.192262  312622 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:21:26.192265  312622 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:21:26.192268  312622 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:21:26.192274  312622 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:21:26.192278  312622 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:21:26.192281  312622 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:21:26.192285  312622 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:21:26.192289  312622 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:21:26.192292  312622 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:21:26.192303  312622 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:21:26.192310  312622 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:21:26.192318  312622 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:21:26.192322  312622 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:21:26.192324  312622 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:21:26.192329  312622 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:21:26.192332  312622 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:21:26.192336  312622 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:21:26.192339  312622 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:21:26.192342  312622 cri.go:89] found id: ""
	I1129 09:21:26.192400  312622 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:21:26.208297  312622 out.go:203] 
	W1129 09:21:26.211290  312622 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:21:26.211322  312622 out.go:285] * 
	* 
	W1129 09:21:26.218662  312622 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:21:26.221571  312622 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable ingress --alsologtostderr -v=1: exit status 11 (268.487101ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:21:26.283791  312667 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:21:26.284472  312667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:26.284487  312667 out.go:374] Setting ErrFile to fd 2...
	I1129 09:21:26.284494  312667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:26.284819  312667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:21:26.285176  312667 mustload.go:66] Loading cluster: addons-937561
	I1129 09:21:26.285602  312667 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:26.285618  312667 addons.go:622] checking whether the cluster is paused
	I1129 09:21:26.285781  312667 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:26.285898  312667 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:21:26.286574  312667 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:21:26.304308  312667 ssh_runner.go:195] Run: systemctl --version
	I1129 09:21:26.304373  312667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:21:26.321955  312667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:21:26.424781  312667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:21:26.424872  312667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:26.462287  312667 cri.go:89] found id: "1499838d046cc09b18e58997d96df4911fc101f5b55283ff0952a7cab3eb86a0"
	I1129 09:21:26.462311  312667 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:21:26.462316  312667 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:21:26.462320  312667 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:21:26.462324  312667 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:21:26.462327  312667 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:21:26.462330  312667 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:21:26.462333  312667 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:21:26.462337  312667 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:21:26.462343  312667 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:21:26.462346  312667 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:21:26.462349  312667 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:21:26.462352  312667 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:21:26.462356  312667 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:21:26.462359  312667 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:21:26.462368  312667 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:21:26.462375  312667 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:21:26.462380  312667 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:21:26.462383  312667 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:21:26.462386  312667 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:21:26.462391  312667 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:21:26.462397  312667 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:21:26.462400  312667 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:21:26.462404  312667 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:21:26.462411  312667 cri.go:89] found id: ""
	I1129 09:21:26.462466  312667 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:21:26.477608  312667 out.go:203] 
	W1129 09:21:26.480523  312667 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:21:26.480560  312667 out.go:285] * 
	* 
	W1129 09:21:26.487009  312667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:21:26.489986  312667 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.01s)
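All of the addon-disable failures in this report share the same shape: before disabling anything, minikube checks whether the cluster is paused by SSHing into the node, listing kube-system containers with crictl, and then running `sudo runc list -f json`; on this crio node the runc state directory /run/runc does not exist, so that last command exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal repro sketch in Go, assuming it is run on the minikube node (e.g. via `minikube ssh`) -- it only re-runs the two commands visible in the log and is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run echoes a command and its combined output, returning the exec error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s\n", name, args, out)
		return err
	}

	func main() {
		// Listing kube-system containers through the CRI succeeds on crio.
		_ = run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		// The pause check then asks runc directly; with no /run/runc state
		// directory this exits 1, which surfaces as MK_ADDON_DISABLE_PAUSED above.
		if err := run("sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Println("runc list failed:", err)
		}
	}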

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dz9wn" [8fea0fbd-f422-435c-9a85-26dc59c620df] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003996292s
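The "waiting 8m0s for pods matching ..." lines come from a poll-until-ready helper that repeatedly lists pods by label until they are Running. A compact client-go sketch of the same pattern -- illustrative only, not the helper in helpers_test.go:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls every 2s until all pods matching selector in ns are
	// Running, or the timeout expires.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // nothing yet (or transient error): keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}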
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (260.40138ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:19:58.598525  311510 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:19:58.599373  311510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:58.599417  311510 out.go:374] Setting ErrFile to fd 2...
	I1129 09:19:58.599437  311510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:58.599720  311510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:19:58.600040  311510 mustload.go:66] Loading cluster: addons-937561
	I1129 09:19:58.600506  311510 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:58.600552  311510 addons.go:622] checking whether the cluster is paused
	I1129 09:19:58.600688  311510 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:58.600725  311510 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:19:58.601252  311510 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:19:58.619932  311510 ssh_runner.go:195] Run: systemctl --version
	I1129 09:19:58.619982  311510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:19:58.641115  311510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:19:58.748814  311510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:19:58.748909  311510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:19:58.779123  311510 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:19:58.779144  311510 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:19:58.779148  311510 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:19:58.779153  311510 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:19:58.779156  311510 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:19:58.779159  311510 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:19:58.779162  311510 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:19:58.779165  311510 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:19:58.779170  311510 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:19:58.779176  311510 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:19:58.779179  311510 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:19:58.779186  311510 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:19:58.779196  311510 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:19:58.779199  311510 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:19:58.779206  311510 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:19:58.779211  311510 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:19:58.779219  311510 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:19:58.779229  311510 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:19:58.779233  311510 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:19:58.779236  311510 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:19:58.779240  311510 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:19:58.779246  311510 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:19:58.779249  311510 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:19:58.779255  311510 cri.go:89] found id: ""
	I1129 09:19:58.779308  311510 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:19:58.794562  311510 out.go:203] 
	W1129 09:19:58.797566  311510 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:19:58.797593  311510 out.go:285] * 
	* 
	W1129 09:19:58.803966  311510 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:19:58.806953  311510 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.297644ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003681603s
addons_test.go:463: (dbg) Run:  kubectl --context addons-937561 top pods -n kube-system
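`kubectl top pods` only works once metrics-server is serving the aggregated metrics.k8s.io API, which is what the readiness wait above establishes. A small sketch of reading the same numbers programmatically with the k8s.io/metrics client -- not part of the test itself, and the kubeconfig path and namespace are just the obvious defaults:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		mc, err := metricsclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same data `kubectl top pods -n kube-system` renders, straight from metrics.k8s.io.
		pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, c := range p.Containers {
				fmt.Println(p.Name, c.Name, "cpu:", c.Usage.Cpu(), "mem:", c.Usage.Memory())
			}
		}
	}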
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (275.763478ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:19:01.255826  310325 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:19:01.256683  310325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:01.256729  310325 out.go:374] Setting ErrFile to fd 2...
	I1129 09:19:01.256755  310325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:01.257638  310325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:19:01.258039  310325 mustload.go:66] Loading cluster: addons-937561
	I1129 09:19:01.258559  310325 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:01.258608  310325 addons.go:622] checking whether the cluster is paused
	I1129 09:19:01.258740  310325 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:01.258775  310325 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:19:01.259346  310325 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:19:01.277459  310325 ssh_runner.go:195] Run: systemctl --version
	I1129 09:19:01.277513  310325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:19:01.308987  310325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:19:01.417351  310325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:19:01.417438  310325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:19:01.448679  310325 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:19:01.448704  310325 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:19:01.448710  310325 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:19:01.448714  310325 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:19:01.448717  310325 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:19:01.448721  310325 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:19:01.448724  310325 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:19:01.448727  310325 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:19:01.448730  310325 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:19:01.448736  310325 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:19:01.448740  310325 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:19:01.448743  310325 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:19:01.448746  310325 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:19:01.448749  310325 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:19:01.448752  310325 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:19:01.448761  310325 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:19:01.448765  310325 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:19:01.448770  310325 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:19:01.448773  310325 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:19:01.448776  310325 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:19:01.448781  310325 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:19:01.448784  310325 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:19:01.448786  310325 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:19:01.448789  310325 cri.go:89] found id: ""
	I1129 09:19:01.448851  310325 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:19:01.465345  310325 out.go:203] 
	W1129 09:19:01.468645  310325 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:19:01.468671  310325 out.go:285] * 
	* 
	W1129 09:19:01.475114  310325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:19:01.478233  310325 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

                                                
                                    
x
+
TestAddons/parallel/CSI (55.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1129 09:18:58.044901  302182 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1129 09:18:58.049718  302182 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1129 09:18:58.049744  302182 kapi.go:107] duration metric: took 4.856656ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.865608ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-937561 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-937561 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6f44b958-d057-4b9c-8e3c-36921c080854] Pending
helpers_test.go:352: "task-pv-pod" [6f44b958-d057-4b9c-8e3c-36921c080854] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6f44b958-d057-4b9c-8e3c-36921c080854] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0039572s
addons_test.go:572: (dbg) Run:  kubectl --context addons-937561 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-937561 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-937561 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-937561 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-937561 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-937561 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-937561 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2847890d-2340-4a02-8c08-2900e5149e57] Pending
helpers_test.go:352: "task-pv-pod-restore" [2847890d-2340-4a02-8c08-2900e5149e57] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2847890d-2340-4a02-8c08-2900e5149e57] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00378495s
addons_test.go:614: (dbg) Run:  kubectl --context addons-937561 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-937561 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-937561 delete volumesnapshot new-snapshot-demo
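The snapshot/restore exercise above boils down to: create a PVC, mount it in a pod, snapshot it, delete the pod and the original claim, then create a new claim whose dataSource references the snapshot. A sketch of that restore claim as a Go object (recent k8s.io/api types; the size and access mode are illustrative guesses, not the actual testdata/csi-hostpath-driver manifest):

	package example

	import (
		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// restoredPVC mirrors the hpvc-restore claim created above: its dataSource
	// points at the VolumeSnapshot taken from the original hpvc claim.
	func restoredPVC() *corev1.PersistentVolumeClaim {
		apiGroup := "snapshot.storage.k8s.io"
		return &corev1.PersistentVolumeClaim{
			ObjectMeta: metav1.ObjectMeta{Name: "hpvc-restore", Namespace: "default"},
			Spec: corev1.PersistentVolumeClaimSpec{
				AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				Resources: corev1.VolumeResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
				},
				DataSource: &corev1.TypedLocalObjectReference{
					APIGroup: &apiGroup,
					Kind:     "VolumeSnapshot",
					Name:     "new-snapshot-demo",
				},
			},
		}
	}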
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (282.27548ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:19:52.972288  311392 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:19:52.973075  311392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:52.973090  311392 out.go:374] Setting ErrFile to fd 2...
	I1129 09:19:52.973096  311392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:52.973376  311392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:19:52.973687  311392 mustload.go:66] Loading cluster: addons-937561
	I1129 09:19:52.974127  311392 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:52.974148  311392 addons.go:622] checking whether the cluster is paused
	I1129 09:19:52.974266  311392 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:52.974283  311392 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:19:52.974786  311392 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:19:52.992460  311392 ssh_runner.go:195] Run: systemctl --version
	I1129 09:19:52.992518  311392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:19:53.013322  311392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:19:53.121926  311392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:19:53.122115  311392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:19:53.162840  311392 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:19:53.162870  311392 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:19:53.162875  311392 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:19:53.162879  311392 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:19:53.162882  311392 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:19:53.162885  311392 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:19:53.162888  311392 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:19:53.162891  311392 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:19:53.162894  311392 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:19:53.162901  311392 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:19:53.162904  311392 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:19:53.162907  311392 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:19:53.162910  311392 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:19:53.162913  311392 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:19:53.162916  311392 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:19:53.162922  311392 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:19:53.162928  311392 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:19:53.162932  311392 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:19:53.162936  311392 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:19:53.162939  311392 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:19:53.162950  311392 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:19:53.162958  311392 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:19:53.162962  311392 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:19:53.162965  311392 cri.go:89] found id: ""
	I1129 09:19:53.163026  311392 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:19:53.190296  311392 out.go:203] 
	W1129 09:19:53.193314  311392 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:19:53.193345  311392 out.go:285] * 
	* 
	W1129 09:19:53.199774  311392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:19:53.203708  311392 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (336.889712ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:19:53.284948  311442 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:19:53.285924  311442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:53.285935  311442 out.go:374] Setting ErrFile to fd 2...
	I1129 09:19:53.285940  311442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:19:53.286410  311442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:19:53.286830  311442 mustload.go:66] Loading cluster: addons-937561
	I1129 09:19:53.287387  311442 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:53.287403  311442 addons.go:622] checking whether the cluster is paused
	I1129 09:19:53.287527  311442 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:19:53.287537  311442 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:19:53.288068  311442 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:19:53.315095  311442 ssh_runner.go:195] Run: systemctl --version
	I1129 09:19:53.315169  311442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:19:53.333683  311442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:19:53.457028  311442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:19:53.457114  311442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:19:53.506346  311442 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:19:53.506371  311442 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:19:53.506376  311442 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:19:53.506380  311442 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:19:53.506383  311442 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:19:53.506387  311442 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:19:53.506390  311442 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:19:53.506393  311442 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:19:53.506397  311442 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:19:53.506407  311442 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:19:53.506411  311442 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:19:53.506414  311442 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:19:53.506417  311442 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:19:53.506420  311442 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:19:53.506423  311442 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:19:53.506429  311442 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:19:53.506432  311442 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:19:53.506439  311442 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:19:53.506442  311442 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:19:53.506445  311442 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:19:53.506450  311442 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:19:53.506453  311442 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:19:53.506456  311442 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:19:53.506459  311442 cri.go:89] found id: ""
	I1129 09:19:53.506514  311442 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:19:53.528683  311442 out.go:203] 
	W1129 09:19:53.531636  311442 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:19:53.531661  311442 out.go:285] * 
	* 
	W1129 09:19:53.538239  311442 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:19:53.541163  311442 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (55.50s)
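Note: the addon enable/disable failures in this group share one root cause. Before touching an addon, minikube verifies the cluster is not paused; that check shells into the node, lists kube-system containers with crictl (which succeeds above), and then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node. A minimal sketch for reproducing the check by hand, assuming the addons-937561 profile is still running and reachable via `minikube ssh`; the commands simply mirror the ones captured in the stderr above:

	# list kube-system containers the way the paused-check does (this step succeeds in the log)
	minikube -p addons-937561 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the follow-up call is the one that fails: runc keeps its state under /run/runc,
	# which is absent here, so it prints "open /run/runc: no such file or directory"
	minikube -p addons-937561 ssh -- sudo runc list -f json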

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-937561 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-937561 --alsologtostderr -v=1: exit status 11 (274.76184ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:33.913357  309125 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:33.914215  309125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:33.914254  309125 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:33.914275  309125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:33.914570  309125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:33.914906  309125 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:33.915335  309125 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:33.915384  309125 addons.go:622] checking whether the cluster is paused
	I1129 09:18:33.915523  309125 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:33.915558  309125 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:33.916104  309125 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:33.934067  309125 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:33.934225  309125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:33.950798  309125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:34.056679  309125 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:34.056775  309125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:34.089018  309125 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:34.089042  309125 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:34.089048  309125 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:34.089053  309125 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:34.089056  309125 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:34.089060  309125 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:34.089064  309125 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:34.089103  309125 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:34.089108  309125 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:34.089140  309125 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:34.089150  309125 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:34.089154  309125 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:34.089158  309125 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:34.089161  309125 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:34.089178  309125 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:34.089185  309125 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:34.089194  309125 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:34.089200  309125 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:34.089204  309125 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:34.089207  309125 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:34.089212  309125 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:34.089215  309125 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:34.089219  309125 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:34.089222  309125 cri.go:89] found id: ""
	I1129 09:18:34.089288  309125 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:34.104294  309125 out.go:203] 
	W1129 09:18:34.107215  309125 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:34.107241  309125 out.go:285] * 
	* 
	W1129 09:18:34.113556  309125 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:34.116381  309125 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-937561 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-937561
helpers_test.go:243: (dbg) docker inspect addons-937561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f",
	        "Created": "2025-11-29T09:15:56.838859923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:15:56.897017523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f-json.log",
	        "Name": "/addons-937561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-937561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-937561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f",
	                "LowerDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc61182fa3e7ada400d5669550582e348f808faae895f982748bff07fc40711a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-937561",
	                "Source": "/var/lib/docker/volumes/addons-937561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-937561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-937561",
	                "name.minikube.sigs.k8s.io": "addons-937561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27dc52507f57398d147fcfa5124c353acbc3c332b2bc79354c09e1567200156",
	            "SandboxKey": "/var/run/docker/netns/a27dc52507f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-937561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:c1:ed:d0:3f:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52934f66aab693c83ce51ab1a5dca17dee70ef0f2d4c5842285e8c8d9c8754bd",
	                    "EndpointID": "ef0cc4a9f52e9e4d652212f30657b75719c3a5dff085e2275aae9fb77e1aafd6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-937561",
	                        "ff16db5210e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
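For reference, the inspect output above is what minikube parses to reach the node over SSH: the 22/tcp entry under NetworkSettings.Ports (host port 33140 here) is extracted with a Go template, as shown by the cli_runner line in the stderr earlier. A hedged one-liner reproducing that lookup, assuming the container is still named addons-937561:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-937561

With the port in hand, the test harness opens an SSH client to 127.0.0.1:33140 using the profile's id_rsa key, which is the sshutil.go line visible in the captured stderr.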
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-937561 -n addons-937561
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-937561 logs -n 25: (1.410761424s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-574220 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-574220   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p download-only-574220                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-574220   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -o=json --download-only -p download-only-777977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-777977   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p download-only-777977                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-777977   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p download-only-574220                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-574220   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p download-only-777977                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-777977   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ --download-only -p download-docker-753424 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-753424 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p download-docker-753424                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-753424 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ --download-only -p binary-mirror-549171 --alsologtostderr --binary-mirror http://127.0.0.1:40279 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-549171   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ -p binary-mirror-549171                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-549171   │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ addons  │ enable dashboard -p addons-937561                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-937561                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ start   │ -p addons-937561 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ addons-937561 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ addons-937561 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ addons  │ enable headlamp -p addons-937561 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-937561          │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:31.087094  302940 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:31.087226  302940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:31.087232  302940 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:31.087237  302940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:31.087504  302940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:15:31.087954  302940 out.go:368] Setting JSON to false
	I1129 09:15:31.088761  302940 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7080,"bootTime":1764400651,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:15:31.088836  302940 start.go:143] virtualization:  
	I1129 09:15:31.092210  302940 out.go:179] * [addons-937561] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:15:31.096145  302940 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:15:31.096277  302940 notify.go:221] Checking for updates...
	I1129 09:15:31.102111  302940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:31.105097  302940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:15:31.107935  302940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:15:31.110919  302940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:15:31.113798  302940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:15:31.117041  302940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:31.150923  302940 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:15:31.151062  302940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:31.211003  302940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:31.202097401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:31.211112  302940 docker.go:319] overlay module found
	I1129 09:15:31.214320  302940 out.go:179] * Using the docker driver based on user configuration
	I1129 09:15:31.217102  302940 start.go:309] selected driver: docker
	I1129 09:15:31.217122  302940 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:31.217135  302940 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:15:31.217862  302940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:31.280116  302940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:31.271255155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:31.280279  302940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:31.280502  302940 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:15:31.283467  302940 out.go:179] * Using Docker driver with root privileges
	I1129 09:15:31.286304  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:15:31.286380  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:15:31.286393  302940 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:15:31.286481  302940 start.go:353] cluster config:
	{Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1129 09:15:31.291461  302940 out.go:179] * Starting "addons-937561" primary control-plane node in "addons-937561" cluster
	I1129 09:15:31.294241  302940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:15:31.297285  302940 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:15:31.300142  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:31.300194  302940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 09:15:31.300204  302940 cache.go:65] Caching tarball of preloaded images
	I1129 09:15:31.300226  302940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:15:31.300304  302940 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 09:15:31.300316  302940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:15:31.300669  302940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json ...
	I1129 09:15:31.300704  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json: {Name:mk4be157a7892880b738be8e763cf0724c47d991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:15:31.315940  302940 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 09:15:31.316068  302940 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 09:15:31.316086  302940 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 09:15:31.316090  302940 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 09:15:31.316097  302940 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 09:15:31.316102  302940 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1129 09:15:49.494835  302940 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1129 09:15:49.494880  302940 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:15:49.494934  302940 start.go:360] acquireMachinesLock for addons-937561: {Name:mk9fc399e1321a9643dc794a9b0f9e90e1914dc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:15:49.495681  302940 start.go:364] duration metric: took 724.358µs to acquireMachinesLock for "addons-937561"
	I1129 09:15:49.495721  302940 start.go:93] Provisioning new machine with config: &{Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:15:49.495800  302940 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:15:49.499115  302940 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1129 09:15:49.499362  302940 start.go:159] libmachine.API.Create for "addons-937561" (driver="docker")
	I1129 09:15:49.499401  302940 client.go:173] LocalClient.Create starting
	I1129 09:15:49.499514  302940 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 09:15:49.764545  302940 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 09:15:49.959033  302940 cli_runner.go:164] Run: docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:15:49.973756  302940 cli_runner.go:211] docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:15:49.973837  302940 network_create.go:284] running [docker network inspect addons-937561] to gather additional debugging logs...
	I1129 09:15:49.973859  302940 cli_runner.go:164] Run: docker network inspect addons-937561
	W1129 09:15:49.989712  302940 cli_runner.go:211] docker network inspect addons-937561 returned with exit code 1
	I1129 09:15:49.989744  302940 network_create.go:287] error running [docker network inspect addons-937561]: docker network inspect addons-937561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-937561 not found
	I1129 09:15:49.989758  302940 network_create.go:289] output of [docker network inspect addons-937561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-937561 not found
	
	** /stderr **
	I1129 09:15:49.989851  302940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:15:50.005515  302940 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a74000}
	I1129 09:15:50.005559  302940 network_create.go:124] attempt to create docker network addons-937561 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1129 09:15:50.005620  302940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-937561 addons-937561
	I1129 09:15:50.073094  302940 network_create.go:108] docker network addons-937561 192.168.49.0/24 created
	I1129 09:15:50.073129  302940 kic.go:121] calculated static IP "192.168.49.2" for the "addons-937561" container
	I1129 09:15:50.073210  302940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:15:50.090005  302940 cli_runner.go:164] Run: docker volume create addons-937561 --label name.minikube.sigs.k8s.io=addons-937561 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:15:50.109089  302940 oci.go:103] Successfully created a docker volume addons-937561
	I1129 09:15:50.109184  302940 cli_runner.go:164] Run: docker run --rm --name addons-937561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --entrypoint /usr/bin/test -v addons-937561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:15:52.348789  302940 cli_runner.go:217] Completed: docker run --rm --name addons-937561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --entrypoint /usr/bin/test -v addons-937561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.239554653s)
	I1129 09:15:52.348822  302940 oci.go:107] Successfully prepared a docker volume addons-937561
	I1129 09:15:52.348862  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:52.348880  302940 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:15:52.348951  302940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-937561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:15:56.765350  302940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-937561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.41635727s)
	I1129 09:15:56.765385  302940 kic.go:203] duration metric: took 4.416501173s to extract preloaded images to volume ...
	W1129 09:15:56.765532  302940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 09:15:56.765655  302940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:15:56.824268  302940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-937561 --name addons-937561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-937561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-937561 --network addons-937561 --ip 192.168.49.2 --volume addons-937561:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:15:57.129147  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Running}}
	I1129 09:15:57.156908  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.176840  302940 cli_runner.go:164] Run: docker exec addons-937561 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:15:57.241573  302940 oci.go:144] the created container "addons-937561" has a running status.
	I1129 09:15:57.241601  302940 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa...
	I1129 09:15:57.472838  302940 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:15:57.498387  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.531990  302940 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:15:57.532015  302940 kic_runner.go:114] Args: [docker exec --privileged addons-937561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:15:57.604324  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:15:57.628638  302940 machine.go:94] provisionDockerMachine start ...
	I1129 09:15:57.628736  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:15:57.646554  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:57.646879  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:15:57.646889  302940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:15:57.647522  302940 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:16:00.797501  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-937561
	
	I1129 09:16:00.797523  302940 ubuntu.go:182] provisioning hostname "addons-937561"
	I1129 09:16:00.797587  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:00.815422  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:00.815742  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:00.815757  302940 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-937561 && echo "addons-937561" | sudo tee /etc/hostname
	I1129 09:16:00.977309  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-937561
	
	I1129 09:16:00.977462  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:00.993782  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:00.994277  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:00.994308  302940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-937561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-937561/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-937561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:16:01.149184  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
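The "Using SSH client type: native" entries above are libmachine dialing the container's forwarded SSH port (127.0.0.1:33140) with the generated id_rsa key and running the hostname commands shown. A minimal, self-contained Go sketch of the same pattern, assuming golang.org/x/crypto/ssh and the key path from the log (an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port copied from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33140", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // expected: addons-937561, matching the SSH output above
}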
	I1129 09:16:01.149260  302940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 09:16:01.149332  302940 ubuntu.go:190] setting up certificates
	I1129 09:16:01.149361  302940 provision.go:84] configureAuth start
	I1129 09:16:01.149433  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:01.167373  302940 provision.go:143] copyHostCerts
	I1129 09:16:01.167469  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 09:16:01.167618  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 09:16:01.167682  302940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 09:16:01.167736  302940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.addons-937561 san=[127.0.0.1 192.168.49.2 addons-937561 localhost minikube]
	I1129 09:16:01.451675  302940 provision.go:177] copyRemoteCerts
	I1129 09:16:01.451742  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:16:01.451785  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.470989  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:01.577793  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 09:16:01.595280  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:16:01.612815  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:16:01.631214  302940 provision.go:87] duration metric: took 481.823107ms to configureAuth
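configureAuth above issued a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-937561, localhost and minikube. A rough, self-signed sketch of producing such a certificate with only the Go standard library (for illustration; minikube's certs.go signs against its own ca.pem rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-937561"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log above.
		DNSNames:    []string{"addons-937561", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for the sketch: the template doubles as its own issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}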
	I1129 09:16:01.631281  302940 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:16:01.631500  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:01.631614  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.648958  302940 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:01.649271  302940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1129 09:16:01.649291  302940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:16:01.953439  302940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:16:01.953462  302940 machine.go:97] duration metric: took 4.324804462s to provisionDockerMachine
	I1129 09:16:01.953472  302940 client.go:176] duration metric: took 12.45406039s to LocalClient.Create
	I1129 09:16:01.953485  302940 start.go:167] duration metric: took 12.454124622s to libmachine.API.Create "addons-937561"
	I1129 09:16:01.953492  302940 start.go:293] postStartSetup for "addons-937561" (driver="docker")
	I1129 09:16:01.953505  302940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:16:01.953579  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:16:01.953624  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:01.971246  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.078464  302940 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:16:02.081991  302940 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:16:02.082023  302940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:16:02.082035  302940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 09:16:02.082115  302940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 09:16:02.082147  302940 start.go:296] duration metric: took 128.646046ms for postStartSetup
	I1129 09:16:02.082471  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:02.099306  302940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/config.json ...
	I1129 09:16:02.099596  302940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:16:02.099654  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.116127  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.219859  302940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:16:02.225381  302940 start.go:128] duration metric: took 12.729565135s to createHost
	I1129 09:16:02.225407  302940 start.go:83] releasing machines lock for "addons-937561", held for 12.729707348s
	I1129 09:16:02.225478  302940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-937561
	I1129 09:16:02.242408  302940 ssh_runner.go:195] Run: cat /version.json
	I1129 09:16:02.242464  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.242478  302940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:16:02.242543  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:02.266297  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.270367  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:02.459603  302940 ssh_runner.go:195] Run: systemctl --version
	I1129 09:16:02.465802  302940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:16:02.499964  302940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:16:02.504393  302940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:16:02.504485  302940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:16:02.532984  302940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 09:16:02.533061  302940 start.go:496] detecting cgroup driver to use...
	I1129 09:16:02.533110  302940 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:16:02.533188  302940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:16:02.549698  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:16:02.562266  302940 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:16:02.562370  302940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:16:02.579832  302940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:16:02.598229  302940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:16:02.716223  302940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:16:02.834559  302940 docker.go:234] disabling docker service ...
	I1129 09:16:02.834626  302940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:16:02.855106  302940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:16:02.868601  302940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:16:02.982745  302940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:16:03.100636  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:16:03.113082  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:16:03.127676  302940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:16:03.127754  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.136537  302940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:16:03.136660  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.146027  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.154870  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.164743  302940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:16:03.173081  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.182214  302940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.195425  302940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:03.204090  302940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:16:03.211718  302940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:16:03.218771  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:03.327937  302940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:16:03.484824  302940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:03.484938  302940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:03.488701  302940 start.go:564] Will wait 60s for crictl version
	I1129 09:16:03.488813  302940 ssh_runner.go:195] Run: which crictl
	I1129 09:16:03.492292  302940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:03.516250  302940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:03.516390  302940 ssh_runner.go:195] Run: crio --version
	I1129 09:16:03.544947  302940 ssh_runner.go:195] Run: crio --version
	I1129 09:16:03.577724  302940 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:16:03.580551  302940 cli_runner.go:164] Run: docker network inspect addons-937561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:03.599298  302940 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:03.603010  302940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:03.612290  302940 kubeadm.go:884] updating cluster {Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:03.612416  302940 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:03.612470  302940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:03.648729  302940 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:03.648753  302940 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:16:03.648812  302940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:03.673844  302940 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:03.673868  302940 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:03.673877  302940 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1129 09:16:03.673966  302940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-937561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:16:03.674050  302940 ssh_runner.go:195] Run: crio config
	I1129 09:16:03.736391  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:16:03.736410  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:03.736427  302940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:03.736450  302940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-937561 NodeName:addons-937561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:03.736569  302940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-937561"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:16:03.736642  302940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:03.744617  302940 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:03.744731  302940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:03.752208  302940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1129 09:16:03.765946  302940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:03.779464  302940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1129 09:16:03.791502  302940 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:03.794832  302940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:03.803809  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:03.917198  302940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:03.933103  302940 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561 for IP: 192.168.49.2
	I1129 09:16:03.933121  302940 certs.go:195] generating shared ca certs ...
	I1129 09:16:03.933137  302940 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:03.933955  302940 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 09:16:04.847330  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt ...
	I1129 09:16:04.847364  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt: {Name:mkac8d45d81f8728bae19fa79b1cb3f9b39b4bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:04.847599  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key ...
	I1129 09:16:04.847615  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key: {Name:mk9e27192e1fe89020239cee41fe7012ed7e494c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:04.847708  302940 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 09:16:05.155609  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt ...
	I1129 09:16:05.155642  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt: {Name:mkc058cc1db8a6826bb5a0bc0daef7850cfba061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.156429  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key ...
	I1129 09:16:05.156444  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key: {Name:mk376367515faf0510b70b573b593c791268b6cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.156534  302940 certs.go:257] generating profile certs ...
	I1129 09:16:05.156595  302940 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key
	I1129 09:16:05.156611  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt with IP's: []
	I1129 09:16:05.421662  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt ...
	I1129 09:16:05.421695  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: {Name:mkc3397fa5a25a24bd5f51f2c5c4a606cc819664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.422577  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key ...
	I1129 09:16:05.422597  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.key: {Name:mk4a9c073dc4557d5df42b1ae8c957dd5d02abb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.423354  302940 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb
	I1129 09:16:05.423384  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1129 09:16:05.605977  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb ...
	I1129 09:16:05.606011  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb: {Name:mk9b0c8d9e99cbe481159be628ca5b19b1897710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.606881  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb ...
	I1129 09:16:05.606908  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb: {Name:mk929b22cc4c62d9796b35083fc8d767ea3156c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.607616  302940 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt.2f8e33eb -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt
	I1129 09:16:05.607730  302940 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key.2f8e33eb -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key
	I1129 09:16:05.607824  302940 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key
	I1129 09:16:05.607873  302940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt with IP's: []
	I1129 09:16:05.761686  302940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt ...
	I1129 09:16:05.761722  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt: {Name:mk4eae6adbea52836e2a038870ccc1ea957c14a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.761894  302940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key ...
	I1129 09:16:05.761907  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key: {Name:mk39a6f1248b763f4c4ffd9fea8461ec3e28fcea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.762121  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:16:05.762167  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:16:05.762197  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:05.762234  302940 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 09:16:05.762788  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:05.783085  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:05.801923  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:05.819929  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:16:05.837433  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 09:16:05.853966  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:16:05.871709  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:05.888828  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:05.906601  302940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:05.924358  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:05.937889  302940 ssh_runner.go:195] Run: openssl version
	I1129 09:16:05.944268  302940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:05.952902  302940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.956693  302940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.956761  302940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.997704  302940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:16:06.013949  302940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:06.018518  302940 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:16:06.018574  302940 kubeadm.go:401] StartCluster: {Name:addons-937561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-937561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:06.018661  302940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:06.018735  302940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:06.047308  302940 cri.go:89] found id: ""
	I1129 09:16:06.047403  302940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:06.055714  302940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:16:06.063755  302940 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:16:06.063825  302940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:16:06.071749  302940 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:16:06.071770  302940 kubeadm.go:158] found existing configuration files:
	
	I1129 09:16:06.071846  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:16:06.079810  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:16:06.079929  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:16:06.087791  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:16:06.096102  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:16:06.096178  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:16:06.104109  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:16:06.111989  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:16:06.112055  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:16:06.119301  302940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:16:06.126933  302940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:16:06.126999  302940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:16:06.134250  302940 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:16:06.182497  302940 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:16:06.182813  302940 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:16:06.207474  302940 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:16:06.207551  302940 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 09:16:06.207590  302940 kubeadm.go:319] OS: Linux
	I1129 09:16:06.207638  302940 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:16:06.207687  302940 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 09:16:06.207736  302940 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:16:06.207786  302940 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:16:06.207836  302940 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:16:06.207885  302940 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:16:06.207946  302940 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:16:06.207996  302940 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:16:06.208044  302940 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 09:16:06.276367  302940 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:16:06.276476  302940 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:16:06.276566  302940 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:16:06.285910  302940 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:16:06.289141  302940 out.go:252]   - Generating certificates and keys ...
	I1129 09:16:06.289239  302940 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:16:06.289310  302940 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:16:06.434478  302940 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:16:06.844154  302940 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:16:07.548481  302940 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:16:07.891785  302940 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:16:08.917297  302940 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:16:08.917696  302940 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-937561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 09:16:09.576119  302940 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:16:09.576449  302940 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-937561 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 09:16:10.782564  302940 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:16:10.994698  302940 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:16:11.120656  302940 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:16:11.120930  302940 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:16:11.274092  302940 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:16:12.696326  302940 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:16:12.847704  302940 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:16:13.108716  302940 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:16:13.936240  302940 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:16:13.936813  302940 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:16:13.940123  302940 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:16:13.943536  302940 out.go:252]   - Booting up control plane ...
	I1129 09:16:13.943640  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:16:13.943721  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:16:13.944847  302940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:16:13.960435  302940 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:16:13.960635  302940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:16:13.968503  302940 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:16:13.968858  302940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:16:13.969100  302940 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:16:14.098382  302940 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:16:14.098518  302940 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:16:15.600331  302940 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502571416s
	I1129 09:16:15.604486  302940 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:16:15.604593  302940 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1129 09:16:15.604695  302940 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:16:15.604799  302940 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:16:18.258292  302940 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.65331495s
	I1129 09:16:21.363125  302940 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.758636574s
	I1129 09:16:21.607867  302940 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003192759s
	I1129 09:16:21.626959  302940 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:16:21.641981  302940 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:16:21.654701  302940 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:16:21.654908  302940 kubeadm.go:319] [mark-control-plane] Marking the node addons-937561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:16:21.668336  302940 kubeadm.go:319] [bootstrap-token] Using token: h33wha.0mtwavoxaivfe568
	I1129 09:16:21.673356  302940 out.go:252]   - Configuring RBAC rules ...
	I1129 09:16:21.673488  302940 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:16:21.675523  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:16:21.683570  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:16:21.687703  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:16:21.691721  302940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:16:21.697840  302940 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:16:22.015548  302940 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:16:22.467145  302940 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:16:23.015241  302940 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:16:23.016427  302940 kubeadm.go:319] 
	I1129 09:16:23.016508  302940 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:16:23.016518  302940 kubeadm.go:319] 
	I1129 09:16:23.016596  302940 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:16:23.016604  302940 kubeadm.go:319] 
	I1129 09:16:23.016630  302940 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:16:23.016692  302940 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:16:23.016758  302940 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:16:23.016767  302940 kubeadm.go:319] 
	I1129 09:16:23.016821  302940 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:16:23.016829  302940 kubeadm.go:319] 
	I1129 09:16:23.016877  302940 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:16:23.016885  302940 kubeadm.go:319] 
	I1129 09:16:23.016937  302940 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:16:23.017016  302940 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:16:23.017089  302940 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:16:23.017098  302940 kubeadm.go:319] 
	I1129 09:16:23.017183  302940 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:16:23.017268  302940 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:16:23.017275  302940 kubeadm.go:319] 
	I1129 09:16:23.017359  302940 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h33wha.0mtwavoxaivfe568 \
	I1129 09:16:23.017473  302940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 09:16:23.017498  302940 kubeadm.go:319] 	--control-plane 
	I1129 09:16:23.017506  302940 kubeadm.go:319] 
	I1129 09:16:23.017591  302940 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:16:23.017599  302940 kubeadm.go:319] 
	I1129 09:16:23.017681  302940 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h33wha.0mtwavoxaivfe568 \
	I1129 09:16:23.017790  302940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 09:16:23.020649  302940 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 09:16:23.020878  302940 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 09:16:23.020987  302940 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:16:23.021005  302940 cni.go:84] Creating CNI manager for ""
	I1129 09:16:23.021013  302940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:23.024258  302940 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:16:23.027176  302940 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:16:23.031176  302940 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:16:23.031196  302940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:16:23.044969  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:16:23.324199  302940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:16:23.324318  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:23.324343  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-937561 minikube.k8s.io/updated_at=2025_11_29T09_16_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=addons-937561 minikube.k8s.io/primary=true
	I1129 09:16:23.470616  302940 ops.go:34] apiserver oom_adj: -16
	I1129 09:16:23.470741  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:23.971402  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:24.470917  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:24.970888  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:25.471759  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:25.971293  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:26.470976  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:26.970928  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:27.471002  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:27.971048  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:28.471307  302940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:16:28.580427  302940 kubeadm.go:1114] duration metric: took 5.256229152s to wait for elevateKubeSystemPrivileges
	I1129 09:16:28.580462  302940 kubeadm.go:403] duration metric: took 22.56189219s to StartCluster
	I1129 09:16:28.580479  302940 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:28.580588  302940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:16:28.580985  302940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:28.581852  302940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:16:28.581994  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:16:28.582261  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:28.582305  302940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1129 09:16:28.582403  302940 addons.go:70] Setting yakd=true in profile "addons-937561"
	I1129 09:16:28.582424  302940 addons.go:239] Setting addon yakd=true in "addons-937561"
	I1129 09:16:28.582451  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.582949  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.583405  302940 addons.go:70] Setting metrics-server=true in profile "addons-937561"
	I1129 09:16:28.583424  302940 addons.go:239] Setting addon metrics-server=true in "addons-937561"
	I1129 09:16:28.583447  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.583456  302940 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-937561"
	I1129 09:16:28.583472  302940 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-937561"
	I1129 09:16:28.583493  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.583879  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.583899  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586170  302940 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-937561"
	I1129 09:16:28.586870  302940 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-937561"
	I1129 09:16:28.586911  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.587362  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586763  302940 addons.go:70] Setting registry=true in profile "addons-937561"
	I1129 09:16:28.589556  302940 addons.go:239] Setting addon registry=true in "addons-937561"
	I1129 09:16:28.589676  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.590843  302940 addons.go:70] Setting cloud-spanner=true in profile "addons-937561"
	I1129 09:16:28.596423  302940 addons.go:239] Setting addon cloud-spanner=true in "addons-937561"
	I1129 09:16:28.596524  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.596843  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.597072  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586788  302940 addons.go:70] Setting registry-creds=true in profile "addons-937561"
	I1129 09:16:28.608286  302940 addons.go:239] Setting addon registry-creds=true in "addons-937561"
	I1129 09:16:28.608330  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.586801  302940 addons.go:70] Setting storage-provisioner=true in profile "addons-937561"
	I1129 09:16:28.608579  302940 addons.go:239] Setting addon storage-provisioner=true in "addons-937561"
	I1129 09:16:28.608603  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.591004  302940 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-937561"
	I1129 09:16:28.608725  302940 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-937561"
	I1129 09:16:28.608767  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.609280  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.611691  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586815  302940 addons.go:70] Setting volcano=true in profile "addons-937561"
	I1129 09:16:28.618840  302940 addons.go:239] Setting addon volcano=true in "addons-937561"
	I1129 09:16:28.618877  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.619349  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586809  302940 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-937561"
	I1129 09:16:28.619587  302940 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-937561"
	I1129 09:16:28.620732  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.586820  302940 addons.go:70] Setting volumesnapshots=true in profile "addons-937561"
	I1129 09:16:28.641362  302940 addons.go:239] Setting addon volumesnapshots=true in "addons-937561"
	I1129 09:16:28.641402  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.641903  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591017  302940 addons.go:70] Setting default-storageclass=true in profile "addons-937561"
	I1129 09:16:28.648719  302940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-937561"
	I1129 09:16:28.649147  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591029  302940 addons.go:70] Setting ingress=true in profile "addons-937561"
	I1129 09:16:28.678635  302940 addons.go:239] Setting addon ingress=true in "addons-937561"
	I1129 09:16:28.678687  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.679283  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591024  302940 addons.go:70] Setting gcp-auth=true in profile "addons-937561"
	I1129 09:16:28.686362  302940 mustload.go:66] Loading cluster: addons-937561
	I1129 09:16:28.686576  302940 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:28.686828  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591047  302940 addons.go:70] Setting ingress-dns=true in profile "addons-937561"
	I1129 09:16:28.713043  302940 addons.go:239] Setting addon ingress-dns=true in "addons-937561"
	I1129 09:16:28.713092  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.713583  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.591075  302940 addons.go:70] Setting inspektor-gadget=true in profile "addons-937561"
	I1129 09:16:28.718658  302940 addons.go:239] Setting addon inspektor-gadget=true in "addons-937561"
	I1129 09:16:28.718701  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.719188  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.749860  302940 out.go:179] * Verifying Kubernetes components...
	I1129 09:16:28.753985  302940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:28.758607  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.782622  302940 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1129 09:16:28.802747  302940 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1129 09:16:28.808534  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1129 09:16:28.808603  302940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1129 09:16:28.808706  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.816781  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1129 09:16:28.858545  302940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:16:28.862351  302940 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1129 09:16:28.862989  302940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:28.863027  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:16:28.863130  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.865853  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:16:28.865878  302940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:16:28.866047  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.885585  302940 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1129 09:16:28.886381  302940 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1129 09:16:28.888794  302940 out.go:179]   - Using image docker.io/registry:3.0.0
	I1129 09:16:28.888961  302940 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 09:16:28.889003  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1129 09:16:28.889093  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.894750  302940 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1129 09:16:28.894774  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1129 09:16:28.894842  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.907397  302940 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 09:16:28.907417  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1129 09:16:28.907480  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.945194  302940 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-937561"
	I1129 09:16:28.945239  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.945762  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.950027  302940 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1129 09:16:28.951239  302940 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1129 09:16:28.962583  302940 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1129 09:16:28.968065  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.970944  302940 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1129 09:16:28.970964  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1129 09:16:28.971058  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:28.971513  302940 addons.go:239] Setting addon default-storageclass=true in "addons-937561"
	I1129 09:16:28.971568  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:28.972006  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:28.967590  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1129 09:16:28.967620  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:29.006270  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1129 09:16:28.967782  302940 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1129 09:16:29.014378  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1129 09:16:29.014453  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.014235  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1129 09:16:29.036648  302940 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1129 09:16:29.036852  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1129 09:16:29.036876  302940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1129 09:16:29.036955  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.065759  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1129 09:16:29.069810  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1129 09:16:29.072749  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1129 09:16:29.076550  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1129 09:16:29.084001  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.088522  302940 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1129 09:16:29.089644  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:29.091519  302940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1129 09:16:29.089904  302940 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 09:16:29.091584  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1129 09:16:29.091659  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.089951  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.092658  302940 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 09:16:29.092692  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1129 09:16:29.092744  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.114979  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1129 09:16:29.115001  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1129 09:16:29.115061  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.136437  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.138052  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1129 09:16:29.142387  302940 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 09:16:29.142412  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1129 09:16:29.142478  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.162159  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.165580  302940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:16:29.178386  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.208571  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.208570  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.209474  302940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:29.209492  302940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:16:29.209558  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.217311  302940 out.go:179]   - Using image docker.io/busybox:stable
	I1129 09:16:29.222247  302940 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1129 09:16:29.230278  302940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 09:16:29.230305  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1129 09:16:29.230372  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:29.255998  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.274881  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.276256  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.284435  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.294544  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.309382  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.324533  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.325160  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:29.339306  302940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:29.797362  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:16:29.797442  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1129 09:16:29.845210  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1129 09:16:29.845230  302940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1129 09:16:29.884404  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1129 09:16:29.884427  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1129 09:16:29.915393  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 09:16:29.919884  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:29.920193  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 09:16:29.923306  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1129 09:16:29.923328  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1129 09:16:29.927365  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 09:16:29.931563  302940 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1129 09:16:29.931582  302940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1129 09:16:29.937898  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 09:16:29.942610  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1129 09:16:29.945063  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:29.948410  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:16:29.948485  302940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:16:30.007495  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1129 09:16:30.007577  302940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1129 09:16:30.039541  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 09:16:30.064648  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1129 09:16:30.074680  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1129 09:16:30.074763  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1129 09:16:30.145571  302940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:16:30.145654  302940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:16:30.148443  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 09:16:30.160520  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1129 09:16:30.160608  302940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1129 09:16:30.163744  302940 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1129 09:16:30.163819  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1129 09:16:30.167411  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1129 09:16:30.167485  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1129 09:16:30.272967  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1129 09:16:30.273045  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1129 09:16:30.351584  302940 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1129 09:16:30.351654  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1129 09:16:30.363833  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1129 09:16:30.375601  302940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1129 09:16:30.375682  302940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1129 09:16:30.388383  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:16:30.468308  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1129 09:16:30.468387  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1129 09:16:30.537016  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1129 09:16:30.542395  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1129 09:16:30.542472  302940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1129 09:16:30.649385  302940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.483769722s)
	I1129 09:16:30.649481  302940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.310150726s)
	I1129 09:16:30.650247  302940 node_ready.go:35] waiting up to 6m0s for node "addons-937561" to be "Ready" ...
	I1129 09:16:30.650446  302940 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1129 09:16:30.660657  302940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1129 09:16:30.660739  302940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1129 09:16:30.773083  302940 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:30.773156  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1129 09:16:31.102970  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:31.113906  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1129 09:16:31.113928  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1129 09:16:31.158437  302940 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-937561" context rescaled to 1 replicas
	I1129 09:16:31.286283  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1129 09:16:31.286355  302940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1129 09:16:31.452516  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1129 09:16:31.452589  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1129 09:16:31.719853  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1129 09:16:31.719925  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1129 09:16:31.880059  302940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 09:16:31.880139  302940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1129 09:16:32.088806  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1129 09:16:32.661042  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:34.653192  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.73771722s)
	I1129 09:16:34.653225  302940 addons.go:495] Verifying addon ingress=true in "addons-937561"
	I1129 09:16:34.653429  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.73318372s)
	I1129 09:16:34.653471  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.73352089s)
	I1129 09:16:34.653660  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.726275789s)
	I1129 09:16:34.653708  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.715793567s)
	I1129 09:16:34.653771  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.711094002s)
	I1129 09:16:34.653817  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.708692871s)
	I1129 09:16:34.653852  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.614233282s)
	I1129 09:16:34.653879  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.589154832s)
	I1129 09:16:34.653923  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.505411052s)
	I1129 09:16:34.654041  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.290135541s)
	I1129 09:16:34.654059  302940 addons.go:495] Verifying addon registry=true in "addons-937561"
	I1129 09:16:34.654404  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.26593204s)
	I1129 09:16:34.654428  302940 addons.go:495] Verifying addon metrics-server=true in "addons-937561"
	I1129 09:16:34.654469  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117379235s)
	I1129 09:16:34.654721  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.551721969s)
	W1129 09:16:34.655824  302940 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 09:16:34.655854  302940 retry.go:31] will retry after 147.983939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 09:16:34.656714  302940 out.go:179] * Verifying ingress addon...
	I1129 09:16:34.656754  302940 out.go:179] * Verifying registry addon...
	I1129 09:16:34.659029  302940 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-937561 service yakd-dashboard -n yakd-dashboard
	
	I1129 09:16:34.661547  302940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1129 09:16:34.662216  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1129 09:16:34.663115  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:34.688336  302940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 09:16:34.688358  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:34.688450  302940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 09:16:34.688472  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1129 09:16:34.690861  302940 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1129 09:16:34.804549  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 09:16:35.029840  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.940982209s)
	I1129 09:16:35.029931  302940 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-937561"
	I1129 09:16:35.033106  302940 out.go:179] * Verifying csi-hostpath-driver addon...
	I1129 09:16:35.036956  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1129 09:16:35.047733  302940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 09:16:35.047806  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:35.178288  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:35.179089  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:35.541040  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:35.666747  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:35.667991  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.043117  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:36.166931  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.167299  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:36.540898  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:36.578175  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1129 09:16:36.578284  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:36.594754  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:36.666755  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:36.666822  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:36.707380  302940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1129 09:16:36.720694  302940 addons.go:239] Setting addon gcp-auth=true in "addons-937561"
	I1129 09:16:36.720745  302940 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:16:36.721210  302940 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:16:36.739615  302940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1129 09:16:36.739691  302940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:16:36.756529  302940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:16:37.040779  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1129 09:16:37.153684  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:37.166126  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:37.166502  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:37.535652  302940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.730993793s)
	I1129 09:16:37.538573  302940 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1129 09:16:37.541111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:37.544044  302940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 09:16:37.546996  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1129 09:16:37.547018  302940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1129 09:16:37.562933  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1129 09:16:37.563000  302940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1129 09:16:37.578571  302940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 09:16:37.578598  302940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1129 09:16:37.591437  302940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 09:16:37.668398  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:37.668999  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.052330  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:38.083616  302940 addons.go:495] Verifying addon gcp-auth=true in "addons-937561"
	I1129 09:16:38.087256  302940 out.go:179] * Verifying gcp-auth addon...
	I1129 09:16:38.091913  302940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1129 09:16:38.097222  302940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1129 09:16:38.097249  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:38.167178  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:38.167497  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.540914  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:38.594757  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:38.666327  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:38.666450  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.040760  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:39.095559  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:39.167038  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.167248  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:39.540166  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:39.595077  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:39.653639  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:39.665544  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:39.665949  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:40.047336  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:40.095780  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:40.167146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:40.169620  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:40.540916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:40.595508  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:40.665468  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:40.665513  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.040436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:41.095451  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:41.165823  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:41.166096  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.541185  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:41.595054  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:41.654104  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:41.666546  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:41.666764  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.040231  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:42.095872  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:42.167695  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:42.168317  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.540347  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:42.595889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:42.665839  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:42.665978  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.039866  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:43.095548  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:43.166428  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:43.166607  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.541008  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:43.596014  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:43.665876  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:43.665940  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:44.039798  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:44.095547  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:44.153249  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:44.166569  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:44.166724  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:44.539876  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:44.595401  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:44.666000  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:44.666099  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.043942  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:45.099413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:45.169733  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:45.181094  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.542168  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:45.595089  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:45.665415  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:45.666003  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.040449  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:46.095411  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:46.166484  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:46.166603  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.540931  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:46.595678  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:46.653472  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:46.666153  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:46.666318  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:47.040660  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:47.095290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:47.165316  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:47.165900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:47.540351  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:47.594976  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:47.665898  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:47.666494  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:48.040852  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:48.095545  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:48.165851  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:48.165900  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:48.539944  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:48.594851  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:48.653756  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:48.665801  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:48.665917  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:49.039771  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:49.095428  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:49.165766  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:49.166681  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:49.541027  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:49.594833  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:49.665596  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:49.665692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:50.040296  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:50.095441  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:50.166802  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:50.167308  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:50.540359  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:50.595220  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:50.653855  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:50.665936  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:50.666001  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.040170  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:51.095173  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:51.166762  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:51.166889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.540436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:51.595310  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:51.665963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:51.666367  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.040929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:52.095086  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:52.166105  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.166690  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:52.539832  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:52.595997  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:52.654110  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:52.666453  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:52.666574  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.041662  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:53.095957  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:53.166242  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.166539  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:53.541045  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:53.594863  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:53.666111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:53.666429  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:54.040576  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:54.095734  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:54.167033  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:54.167146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:54.540423  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:54.595346  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:54.666023  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:54.666399  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.040688  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:55.095614  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:55.153541  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:55.166530  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:55.166654  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.541005  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:55.595680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:55.665735  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:55.665808  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.040650  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:56.094870  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:56.165929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.166679  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:56.540646  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:56.595573  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:56.665593  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:56.665811  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:57.039884  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:57.094755  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:57.153811  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:57.165990  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:57.166175  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:57.540998  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:57.596558  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:57.666345  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:57.666505  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:58.041020  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:58.095692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:58.166514  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:58.167149  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:58.540457  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:58.595874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:58.665656  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:58.666376  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:59.040940  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:59.095530  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:16:59.166884  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:16:59.167624  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:59.539798  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:16:59.596133  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:16:59.653647  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:16:59.665750  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:16:59.665994  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:00.051067  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:00.098382  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:00.184333  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:00.192703  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:00.540524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:00.596286  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:00.666188  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:00.666555  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.040963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:01.095248  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:01.166900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:01.167339  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.541205  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:01.595402  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:01.654011  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:01.666387  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:01.666517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.041041  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:02.095223  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:02.166689  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:02.166761  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.539901  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:02.595142  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:02.666060  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:02.666289  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.040637  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:03.095613  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:03.166380  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.166420  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:03.540993  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:03.595224  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:03.666019  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:03.666111  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.040467  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:04.095808  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:04.153973  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:04.166138  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.166472  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:04.540932  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:04.595253  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:04.665727  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:04.666164  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:05.040163  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:05.095449  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:05.165849  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:05.166325  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:05.540716  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:05.595777  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:05.666457  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:05.666544  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:06.040905  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:06.094874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:06.165715  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:06.166195  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:06.540575  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:06.595524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:06.653361  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:06.665934  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:06.665970  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:07.041044  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:07.095192  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:07.165478  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:07.165746  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:07.540063  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:07.595107  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:07.665729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:07.666155  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:08.040509  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:08.095686  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:08.167096  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:08.167279  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:08.540341  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:08.595331  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:08.666125  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:08.666433  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.040672  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:09.095431  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1129 09:17:09.153357  302940 node_ready.go:57] node "addons-937561" has "Ready":"False" status (will retry)
	I1129 09:17:09.165838  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.165900  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:09.551138  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:09.666499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:09.683956  302940 node_ready.go:49] node "addons-937561" is "Ready"
	I1129 09:17:09.683983  302940 node_ready.go:38] duration metric: took 39.033719015s for node "addons-937561" to be "Ready" ...
	I1129 09:17:09.683996  302940 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:09.684051  302940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:09.708433  302940 api_server.go:72] duration metric: took 41.126536472s to wait for apiserver process to appear ...
	I1129 09:17:09.708454  302940 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:09.708473  302940 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1129 09:17:09.710355  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:09.710915  302940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 09:17:09.710954  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:09.718827  302940 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1129 09:17:09.722626  302940 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:09.722704  302940 api_server.go:131] duration metric: took 14.243569ms to wait for apiserver health ...
	I1129 09:17:09.722729  302940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:09.736534  302940 system_pods.go:59] 19 kube-system pods found
	I1129 09:17:09.736617  302940 system_pods.go:61] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending
	I1129 09:17:09.736638  302940 system_pods.go:61] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending
	I1129 09:17:09.736659  302940 system_pods.go:61] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending
	I1129 09:17:09.736699  302940 system_pods.go:61] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:09.736717  302940 system_pods.go:61] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:09.736736  302940 system_pods.go:61] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:09.736769  302940 system_pods.go:61] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:09.736793  302940 system_pods.go:61] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:09.736813  302940 system_pods.go:61] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending
	I1129 09:17:09.736846  302940 system_pods.go:61] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:09.736871  302940 system_pods.go:61] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:09.736889  302940 system_pods.go:61] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:09.736909  302940 system_pods.go:61] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:09.736940  302940 system_pods.go:61] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:09.736966  302940 system_pods.go:61] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending
	I1129 09:17:09.736984  302940 system_pods.go:61] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:09.737016  302940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:09.737044  302940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:09.737065  302940 system_pods.go:61] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:09.737102  302940 system_pods.go:74] duration metric: took 14.353224ms to wait for pod list to return data ...
	I1129 09:17:09.737129  302940 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:09.741595  302940 default_sa.go:45] found service account: "default"
	I1129 09:17:09.741669  302940 default_sa.go:55] duration metric: took 4.520643ms for default service account to be created ...
	I1129 09:17:09.741693  302940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:09.754312  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:09.754397  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending
	I1129 09:17:09.754422  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:09.754460  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending
	I1129 09:17:09.754484  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:09.754501  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:09.754521  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:09.754555  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:09.754581  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:09.754601  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending
	I1129 09:17:09.754635  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:09.754661  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:09.754681  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:09.754716  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:09.754743  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:09.754781  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending
	I1129 09:17:09.754803  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:09.754821  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:09.754843  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:09.754879  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:09.754907  302940 retry.go:31] will retry after 310.106848ms: missing components: kube-dns
	I1129 09:17:10.042294  302940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 09:17:10.042422  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:10.075620  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.075707  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.075733  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.075772  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.075798  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending
	I1129 09:17:10.075819  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.075858  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.075883  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.075904  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.075948  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.075977  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.076000  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.076035  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending
	I1129 09:17:10.076059  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending
	I1129 09:17:10.076080  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending
	I1129 09:17:10.076120  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.076149  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending
	I1129 09:17:10.076174  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending
	I1129 09:17:10.076224  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.076244  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending
	I1129 09:17:10.076293  302940 retry.go:31] will retry after 374.335809ms: missing components: kube-dns
	I1129 09:17:10.097499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:10.168019  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:10.168284  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:10.458232  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.458321  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.458344  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.458385  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.458411  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:10.458429  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.458466  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.458489  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.458509  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.458547  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.458570  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.458589  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.458626  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:10.458653  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:10.458675  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:10.458711  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.458737  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:10.458762  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.458797  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.458824  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:10.458869  302940 retry.go:31] will retry after 364.995744ms: missing components: kube-dns
	I1129 09:17:10.541019  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:10.595403  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:10.666287  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:10.667253  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:10.828829  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:10.828929  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:10.828962  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:10.828987  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:10.829023  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:10.829051  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:10.829075  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:10.829108  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:10.829136  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:10.829163  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:10.829195  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:10.829224  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:10.829250  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:10.829291  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:10.829314  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:10.829343  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:10.829381  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:10.829403  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.829428  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:10.829460  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:10.829501  302940 retry.go:31] will retry after 487.701256ms: missing components: kube-dns
	I1129 09:17:11.041264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:11.096419  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:11.168437  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:11.168873  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:11.324780  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:11.324876  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:11.324906  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:11.324928  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:11.324968  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:11.324990  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:11.325013  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:11.325046  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:11.325066  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:11.325094  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:11.325126  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:11.325156  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:11.325187  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:11.325218  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:11.325249  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:11.325288  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:11.325312  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:11.325359  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.325382  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.325404  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:11.325450  302940 retry.go:31] will retry after 624.811464ms: missing components: kube-dns
	I1129 09:17:11.540517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:11.595583  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:11.666558  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:11.667665  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:11.955068  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:11.955107  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:11.955117  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:11.955125  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:11.955132  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:11.955136  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:11.955141  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:11.955145  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:11.955156  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:11.955163  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:11.955170  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:11.955175  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:11.955181  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:11.955188  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:11.955199  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:11.955205  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:11.955212  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:11.955221  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.955229  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:11.955233  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:11.955248  302940 retry.go:31] will retry after 628.756685ms: missing components: kube-dns
	I1129 09:17:12.040654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:12.097654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:12.197354  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:12.197499  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:12.540912  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:12.589178  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:12.589219  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:12.589231  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:12.589249  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:12.589260  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:12.589265  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:12.589278  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:12.589283  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:12.589288  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:12.589300  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:12.589304  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:12.589309  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:12.589323  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:12.589336  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:12.589342  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:12.589348  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:12.589355  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:12.589367  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:12.589375  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:12.589379  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:12.589406  302940 retry.go:31] will retry after 753.534635ms: missing components: kube-dns
	I1129 09:17:12.595055  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:12.666667  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:12.666765  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.040572  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:13.095146  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:13.167327  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:13.167438  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.347795  302940 system_pods.go:86] 19 kube-system pods found
	I1129 09:17:13.347831  302940 system_pods.go:89] "coredns-66bc5c9577-dwkbv" [ded6f8aa-da01-4321-9eac-d76914054363] Running
	I1129 09:17:13.347843  302940 system_pods.go:89] "csi-hostpath-attacher-0" [22f801bf-937d-4492-bba1-06d25ece248a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 09:17:13.347851  302940 system_pods.go:89] "csi-hostpath-resizer-0" [f3e25c58-7eb8-45f7-a724-18c03ee057a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 09:17:13.347860  302940 system_pods.go:89] "csi-hostpathplugin-w96sq" [3a29f3c1-1a84-44f9-bc0c-0ebe02070706] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 09:17:13.347864  302940 system_pods.go:89] "etcd-addons-937561" [1bbf3f50-4a1f-464c-9926-d8f79bd145d6] Running
	I1129 09:17:13.347869  302940 system_pods.go:89] "kindnet-wk9nw" [6104b24f-c4f9-4033-a3b5-a40d3743b10e] Running
	I1129 09:17:13.347881  302940 system_pods.go:89] "kube-apiserver-addons-937561" [94588656-a4c9-4bf1-b745-2759106136d9] Running
	I1129 09:17:13.347885  302940 system_pods.go:89] "kube-controller-manager-addons-937561" [36ecd713-8781-4a62-88b2-c37f8c330163] Running
	I1129 09:17:13.347895  302940 system_pods.go:89] "kube-ingress-dns-minikube" [71ea44cc-0bfa-4d90-9e2a-087be00b4c83] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 09:17:13.347899  302940 system_pods.go:89] "kube-proxy-79sbl" [d337167e-2b4f-41e2-8c87-3476f5332eee] Running
	I1129 09:17:13.347906  302940 system_pods.go:89] "kube-scheduler-addons-937561" [d4647bee-3fa5-44d0-bc28-dd003c285480] Running
	I1129 09:17:13.347914  302940 system_pods.go:89] "metrics-server-85b7d694d7-jfpt2" [5c3e0736-60d0-4f32-9659-9adb47db823a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 09:17:13.347924  302940 system_pods.go:89] "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 09:17:13.347931  302940 system_pods.go:89] "registry-6b586f9694-9wb6d" [341eb75f-fb9b-48c9-9c27-9e7d56a4a21f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 09:17:13.347937  302940 system_pods.go:89] "registry-creds-764b6fb674-8q8xm" [4407cb92-93a5-4523-b1da-d85a945d9fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 09:17:13.347945  302940 system_pods.go:89] "registry-proxy-5t68c" [6c0ca574-79df-44ac-b741-9efd1c97f277] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 09:17:13.347951  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9qqmq" [c884894e-821d-4228-8df2-8456e35bc816] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:13.347958  302940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nbcng" [d05ba90e-dac2-489f-835b-c23ec3ca0a14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 09:17:13.347963  302940 system_pods.go:89] "storage-provisioner" [8dc43ab8-43c3-48ca-b045-4d5882df5a99] Running
	I1129 09:17:13.347973  302940 system_pods.go:126] duration metric: took 3.606260532s to wait for k8s-apps to be running ...
	I1129 09:17:13.347984  302940 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.348041  302940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.363923  302940 system_svc.go:56] duration metric: took 15.919489ms WaitForService to wait for kubelet
	I1129 09:17:13.363992  302940 kubeadm.go:587] duration metric: took 44.782100402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.364049  302940 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.367221  302940 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:17:13.367294  302940 node_conditions.go:123] node cpu capacity is 2
	I1129 09:17:13.367322  302940 node_conditions.go:105] duration metric: took 3.25505ms to run NodePressure ...
	I1129 09:17:13.367347  302940 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.541247  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:13.595559  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:13.667044  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:13.667833  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.040826  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:14.095794  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:14.175051  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:14.175583  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.541409  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:14.595577  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:14.667201  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:14.667500  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:15.041894  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:15.095242  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:15.167145  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:15.167578  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:15.541874  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:15.595388  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:15.666091  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:15.667571  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.041680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:16.094929  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:16.166833  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:16.167747  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.541930  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:16.596013  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:16.667708  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:16.668094  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:17.040743  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:17.095989  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:17.166178  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:17.167644  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:17.540613  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:17.595520  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:17.665734  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:17.665918  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.041754  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:18.095791  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:18.166884  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.167263  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:18.541536  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:18.595547  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:18.670136  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:18.670328  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:19.043220  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:19.143752  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:19.167117  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:19.167269  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:19.540713  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:19.595704  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:19.665881  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:19.666237  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.040971  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:20.095083  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:20.166239  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:20.166925  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.540846  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:20.595177  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:20.666950  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:20.667251  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:21.041805  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:21.095460  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:21.166520  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:21.167883  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:21.542444  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:21.595479  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:21.667264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:21.667611  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.041497  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:22.095684  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:22.168314  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.168395  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:22.541916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:22.595413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:22.667540  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:22.667815  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:23.040983  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:23.095052  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:23.167770  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:23.167962  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:23.541718  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:23.596064  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:23.667174  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:23.667753  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:24.042043  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:24.095436  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:24.167434  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:24.167669  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:24.540746  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:24.595871  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:24.666127  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:24.666210  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:25.041186  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:25.095369  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:25.166927  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:25.167413  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:25.542889  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:25.595225  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:25.668854  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:25.668423  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.040706  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:26.095022  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:26.167532  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.167803  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:26.541422  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:26.595502  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:26.666409  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:26.666617  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:27.041692  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:27.095773  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:27.167623  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:27.168047  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:27.540791  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:27.596156  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:27.668471  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:27.668930  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:28.040785  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:28.096284  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:28.167459  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:28.167998  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:28.540670  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:28.595249  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:28.665213  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:28.665774  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:29.041991  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:29.095290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:29.166709  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:29.166956  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:29.540699  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:29.595745  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:29.666320  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:29.666793  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.049373  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:30.103192  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:30.166134  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:30.166288  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.540548  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:30.595489  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:30.667152  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:30.667290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.041525  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:31.096205  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:31.167212  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:31.167622  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.541386  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:31.596186  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:31.666290  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:31.667058  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.040893  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:32.095151  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:32.165992  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:32.166824  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.541789  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:32.595662  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:32.666732  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:32.667846  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:33.040817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:33.095635  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:33.166653  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:33.167400  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:33.541397  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:33.595230  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:33.667062  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:33.667410  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:34.042729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:34.095830  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:34.166875  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:34.167152  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:34.540508  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:34.595517  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:34.666885  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:34.667150  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:35.040647  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:35.095821  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:35.166951  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:35.167472  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:35.541511  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:35.595505  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:35.666243  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:35.666357  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.040680  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:36.094698  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:36.166805  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.167528  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:36.544218  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:36.595383  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:36.666033  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:36.666762  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:37.040768  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:37.095879  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:37.167367  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:37.167544  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:37.541006  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:37.595268  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:37.666109  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:37.666379  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:38.041305  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:38.095813  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:38.168305  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:38.169025  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:38.541729  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:38.641565  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:38.670256  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:38.670713  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:39.041915  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:39.095320  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:39.166835  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:39.167351  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:39.548587  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:39.596131  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:39.666817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:39.667463  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:40.041818  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:40.095229  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:40.171512  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:40.171724  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:40.541209  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:40.642183  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:40.665476  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:40.665696  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:41.045730  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:41.095609  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:41.167482  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:41.167819  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:41.541239  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:41.616012  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:41.668646  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:41.668868  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:42.041329  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:42.150227  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:42.166812  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:42.167070  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:42.541524  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:42.641038  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:42.667356  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:42.667523  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:43.041458  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:43.095246  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:43.166232  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:43.166289  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:43.541526  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:43.595919  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:43.668040  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:43.668494  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:44.042504  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:44.095848  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:44.168606  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:44.178134  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:44.540995  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:44.595853  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:44.667635  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:44.668039  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:45.065709  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:45.096961  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:45.168963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:45.170529  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:45.541241  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:45.595654  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:45.666881  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:45.667072  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:46.040959  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:46.095944  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:46.168503  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:46.169244  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:46.541321  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:46.595208  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:46.668580  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:46.669386  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:47.042063  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:47.095817  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:47.167354  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:47.167832  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:47.540311  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:47.595243  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:47.665717  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 09:17:47.666210  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:48.040731  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:48.095620  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:48.165957  302940 kapi.go:107] duration metric: took 1m13.503738263s to wait for kubernetes.io/minikube-addons=registry ...
	I1129 09:17:48.166407  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:48.541600  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:48.595916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:48.668167  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:49.040867  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:49.094903  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:49.176054  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:49.540955  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:49.595369  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:49.673279  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:50.041265  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:50.095877  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:50.165824  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:50.540941  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:50.594698  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:50.665785  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:51.040348  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:51.095279  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:51.165373  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:51.541754  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:51.596068  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:51.666328  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:52.051370  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:52.151355  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:52.165386  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:52.540452  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:52.595033  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:52.666228  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:53.041363  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:53.097032  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:53.168158  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:53.542332  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:53.641943  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 09:17:53.667262  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:54.041029  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:54.095446  302940 kapi.go:107] duration metric: took 1m16.003518182s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1129 09:17:54.098799  302940 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-937561 cluster.
	I1129 09:17:54.101879  302940 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1129 09:17:54.104912  302940 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
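The two opt-out paths described in the gcp-auth messages above can be sketched as follows; the `gcp-auth-skip-secret` label key and the --refresh flag are quoted from those messages, while the pod name and label value are illustrative and not taken from this test run:

    # Hypothetical pod spec: the gcp-auth-skip-secret label asks the gcp-auth
    # webhook not to mount GCP credentials into this particular pod.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-example        # illustrative name, not from this run
      labels:
        gcp-auth-skip-secret: "true"    # label key from the message above; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]

    # To re-mount credentials into pods that already exist, rerun the addon enable with --refresh:
    #   out/minikube-linux-arm64 -p addons-937561 addons enable gcp-auth --refresh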
	I1129 09:17:54.165722  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:54.541264  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:54.666120  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:55.041467  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:55.165441  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:55.540811  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:55.665732  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:56.040343  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:56.166764  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:56.540740  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:56.665962  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:57.041398  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:57.165704  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:57.540963  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:57.665799  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:58.041071  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:58.166895  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:58.539916  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:58.665765  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:59.040921  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:59.178304  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:17:59.547504  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:17:59.667333  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:00.052586  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:00.235670  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:00.543490  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:00.665523  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:01.041494  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:01.165913  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:01.540778  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:01.667541  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:02.041177  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:02.167907  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:02.540722  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:02.665638  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:03.040403  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:03.165941  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:03.551549  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:03.665951  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:04.040210  302940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 09:18:04.166262  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:04.541019  302940 kapi.go:107] duration metric: took 1m29.504062995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1129 09:18:04.666158  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:05.166332  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:05.666433  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:06.165970  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:06.666436  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:07.165936  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:07.665168  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:08.167469  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:08.666373  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:09.166182  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:09.665904  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:10.165399  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:10.666273  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:11.166170  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:11.665583  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:12.166040  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:12.665485  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:13.166784  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:13.665777  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:14.166641  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:14.666556  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:15.168142  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:15.666524  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:16.166306  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:16.665567  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:17.166671  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:17.666252  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:18.165859  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:18.666415  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:19.166782  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:19.668657  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:20.166319  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:20.665831  302940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 09:18:21.168818  302940 kapi.go:107] duration metric: took 1m46.507273702s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1129 09:18:21.171943  302940 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1129 09:18:21.174959  302940 addons.go:530] duration metric: took 1m52.592645975s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns inspektor-gadget storage-provisioner registry-creds cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1129 09:18:21.175021  302940 start.go:247] waiting for cluster config update ...
	I1129 09:18:21.175046  302940 start.go:256] writing updated cluster config ...
	I1129 09:18:21.175334  302940 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:21.180291  302940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:18:21.184418  302940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dwkbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.189696  302940 pod_ready.go:94] pod "coredns-66bc5c9577-dwkbv" is "Ready"
	I1129 09:18:21.189731  302940 pod_ready.go:86] duration metric: took 5.280941ms for pod "coredns-66bc5c9577-dwkbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.196418  302940 pod_ready.go:83] waiting for pod "etcd-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.204765  302940 pod_ready.go:94] pod "etcd-addons-937561" is "Ready"
	I1129 09:18:21.204792  302940 pod_ready.go:86] duration metric: took 8.347074ms for pod "etcd-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.269630  302940 pod_ready.go:83] waiting for pod "kube-apiserver-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.274291  302940 pod_ready.go:94] pod "kube-apiserver-addons-937561" is "Ready"
	I1129 09:18:21.274321  302940 pod_ready.go:86] duration metric: took 4.664669ms for pod "kube-apiserver-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.276613  302940 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.584445  302940 pod_ready.go:94] pod "kube-controller-manager-addons-937561" is "Ready"
	I1129 09:18:21.584477  302940 pod_ready.go:86] duration metric: took 307.839749ms for pod "kube-controller-manager-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:21.785225  302940 pod_ready.go:83] waiting for pod "kube-proxy-79sbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.184722  302940 pod_ready.go:94] pod "kube-proxy-79sbl" is "Ready"
	I1129 09:18:22.184752  302940 pod_ready.go:86] duration metric: took 399.497579ms for pod "kube-proxy-79sbl" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.396095  302940 pod_ready.go:83] waiting for pod "kube-scheduler-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.784250  302940 pod_ready.go:94] pod "kube-scheduler-addons-937561" is "Ready"
	I1129 09:18:22.784322  302940 pod_ready.go:86] duration metric: took 388.195347ms for pod "kube-scheduler-addons-937561" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:18:22.784342  302940 pod_ready.go:40] duration metric: took 1.604006225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:18:22.844151  302940 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:18:22.848957  302940 out.go:179] * Done! kubectl is now configured to use "addons-937561" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 09:18:22 addons-937561 crio[828]: time="2025-11-29T09:18:22.436589589Z" level=info msg="Stopped pod sandbox (already stopped): 44c4f37a76b4e1746ab08a67507f7e450c752311f53b7cce11463d91906b0ac8" id=6a77fadc-e3a2-44cf-a8a6-3b3b0a4e6aa7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:18:22 addons-937561 crio[828]: time="2025-11-29T09:18:22.437047278Z" level=info msg="Removing pod sandbox: 44c4f37a76b4e1746ab08a67507f7e450c752311f53b7cce11463d91906b0ac8" id=1f3f72c3-15c3-4993-b93c-818e20da90f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:18:22 addons-937561 crio[828]: time="2025-11-29T09:18:22.442821632Z" level=info msg="Removed pod sandbox: 44c4f37a76b4e1746ab08a67507f7e450c752311f53b7cce11463d91906b0ac8" id=1f3f72c3-15c3-4993-b93c-818e20da90f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.225135011Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e2baac61-202c-46da-ba4e-c6306085a255 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.225203295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.236829108Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3af909eca006535d92d6907e4eb312bea2747dc3a06c5123250313643e61076b UID:d7157c2f-990a-4dba-877d-2f1f6dc08159 NetNS:/var/run/netns/31ebc806-8d5e-41eb-ba26-b0f4a9fcc308 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ded0}] Aliases:map[]}"
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.23687308Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.246449758Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3af909eca006535d92d6907e4eb312bea2747dc3a06c5123250313643e61076b UID:d7157c2f-990a-4dba-877d-2f1f6dc08159 NetNS:/var/run/netns/31ebc806-8d5e-41eb-ba26-b0f4a9fcc308 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ded0}] Aliases:map[]}"
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.246688933Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.251303895Z" level=info msg="Ran pod sandbox 3af909eca006535d92d6907e4eb312bea2747dc3a06c5123250313643e61076b with infra container: default/busybox/POD" id=e2baac61-202c-46da-ba4e-c6306085a255 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.25306437Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dda0f73c-f238-43cd-9d19-6cb1010dd648 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.253292935Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dda0f73c-f238-43cd-9d19-6cb1010dd648 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.253399882Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dda0f73c-f238-43cd-9d19-6cb1010dd648 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.255342579Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0eaa29c7-07fc-4ac3-af3b-c55cb5cb92e8 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:18:24 addons-937561 crio[828]: time="2025-11-29T09:18:24.257445122Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.361973559Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0eaa29c7-07fc-4ac3-af3b-c55cb5cb92e8 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.362572993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=353eb240-57cd-4970-ab1a-0aec2b52b5d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.36436834Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16a29fa9-9c80-4dbb-88ce-36113d999089 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.37053196Z" level=info msg="Creating container: default/busybox/busybox" id=80634d50-c555-4e47-ad6e-af6b71177b0f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.370663391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.377109024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.377597031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.394550219Z" level=info msg="Created container 6f60b262aa11c6146f61b6640257cf940ff92ee1fdb6a65f9d69d7660c8e4d0b: default/busybox/busybox" id=80634d50-c555-4e47-ad6e-af6b71177b0f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.395501165Z" level=info msg="Starting container: 6f60b262aa11c6146f61b6640257cf940ff92ee1fdb6a65f9d69d7660c8e4d0b" id=6f4a85e6-60d0-4127-b6c7-45106dbb33bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:26 addons-937561 crio[828]: time="2025-11-29T09:18:26.399440713Z" level=info msg="Started container" PID=5004 containerID=6f60b262aa11c6146f61b6640257cf940ff92ee1fdb6a65f9d69d7660c8e4d0b description=default/busybox/busybox id=6f4a85e6-60d0-4127-b6c7-45106dbb33bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=3af909eca006535d92d6907e4eb312bea2747dc3a06c5123250313643e61076b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	6f60b262aa11c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   3af909eca0065       busybox                                    default
	1e4aa857cf0de       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             14 seconds ago       Running             controller                               0                   026780566ef70       ingress-nginx-controller-6c8bf45fb-8gjmc   ingress-nginx
	aef9a73b52cb1       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             28 seconds ago       Exited              patch                                    2                   30c18f3e3a89b       ingress-nginx-admission-patch-t6l5q        ingress-nginx
	3c1b8b66c425e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          31 seconds ago       Running             csi-snapshotter                          0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	9aef6b7b60e4c       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          33 seconds ago       Running             csi-provisioner                          0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	f225ca290de28       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            34 seconds ago       Running             liveness-probe                           0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	30eb3a8c8cd59       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           35 seconds ago       Running             hostpath                                 0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	12a5a97ec92c6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            37 seconds ago       Running             gadget                                   0                   67b94c2a133ba       gadget-dz9wn                               gadget
	e6e40e77afa28       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                40 seconds ago       Running             node-driver-registrar                    0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	d49a00843822c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 42 seconds ago       Running             gcp-auth                                 0                   79a1b326999a7       gcp-auth-78565c9fb4-bz6gr                  gcp-auth
	cfb15c1680321       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   45 seconds ago       Exited              create                                   0                   4ed3d5046030e       ingress-nginx-admission-create-cnhs2       ingress-nginx
	506dbad310eb8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             45 seconds ago       Running             local-path-provisioner                   0                   af45e2a4f9373       local-path-provisioner-648f6765c9-v587p    local-path-storage
	11d43a48abd4b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              47 seconds ago       Running             registry-proxy                           0                   856211fae8341       registry-proxy-5t68c                       kube-system
	ffd3ddcf27f55       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        51 seconds ago       Running             metrics-server                           0                   606b0d42d4242       metrics-server-85b7d694d7-jfpt2            kube-system
	af2e25ba59276       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              53 seconds ago       Running             csi-resizer                              0                   a9d5dfc1d1ced       csi-hostpath-resizer-0                     kube-system
	c8fe1df2373bb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   ce30468d1e40f       snapshot-controller-7d9fbc56b8-9qqmq       kube-system
	b9fd6b139f9a6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           55 seconds ago       Running             registry                                 0                   d5a70d37926f5       registry-6b586f9694-9wb6d                  kube-system
	66fc5abcc6517       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               57 seconds ago       Running             minikube-ingress-dns                     0                   f0ed73780ecb5       kube-ingress-dns-minikube                  kube-system
	f8cb526e085ff       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   8b8d8d094a623       csi-hostpathplugin-w96sq                   kube-system
	0c0ef85d8b377       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   1dbb61d0109c7       nvidia-device-plugin-daemonset-2kd5l       kube-system
	fdb66785f2ceb       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   5ffb1adfc5fe5       cloud-spanner-emulator-5bdddb765-42lcn     default
	5bc214d6f747a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   bed435b65f844       snapshot-controller-7d9fbc56b8-nbcng       kube-system
	48a8f333ea4b4       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   f4c29b965f050       yakd-dashboard-5ff678cb9-n2vjf             yakd-dashboard
	6159812cd62ca       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   17f6b766c5f7f       csi-hostpath-attacher-0                    kube-system
	cea7127d80def       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   e87c28e4780ae       coredns-66bc5c9577-dwkbv                   kube-system
	d8b057511cccc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   55e74c9f25eba       storage-provisioner                        kube-system
	febc943f90d57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   075160918c208       kindnet-wk9nw                              kube-system
	8f16da7a481b2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   b86de4cf9d6c7       kube-proxy-79sbl                           kube-system
	1f72b846137bb       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   ce1005629d6f4       kube-apiserver-addons-937561               kube-system
	b28d6a65a1d2e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   1fa1b9c0e6c15       etcd-addons-937561                         kube-system
	465f08cb21ea0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   5a1dd785d7c17       kube-scheduler-addons-937561               kube-system
	c0d24f1fa0e94       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   5bea9984245ce       kube-controller-manager-addons-937561      kube-system
	
	
	==> coredns [cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57] <==
	[INFO] 10.244.0.9:44133 - 23428 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098184s
	[INFO] 10.244.0.9:44133 - 49821 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001849788s
	[INFO] 10.244.0.9:44133 - 4 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001674779s
	[INFO] 10.244.0.9:44133 - 1748 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000141631s
	[INFO] 10.244.0.9:44133 - 37158 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00013843s
	[INFO] 10.244.0.9:40001 - 4153 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014675s
	[INFO] 10.244.0.9:40001 - 3931 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096371s
	[INFO] 10.244.0.9:55967 - 34097 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090537s
	[INFO] 10.244.0.9:55967 - 33883 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000207592s
	[INFO] 10.244.0.9:49430 - 22791 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00013048s
	[INFO] 10.244.0.9:49430 - 22619 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164852s
	[INFO] 10.244.0.9:59642 - 29718 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00119835s
	[INFO] 10.244.0.9:59642 - 29512 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001325162s
	[INFO] 10.244.0.9:33703 - 64004 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120034s
	[INFO] 10.244.0.9:33703 - 64408 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150788s
	[INFO] 10.244.0.19:45452 - 57458 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199092s
	[INFO] 10.244.0.19:35099 - 40318 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000091308s
	[INFO] 10.244.0.19:56134 - 2462 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196007s
	[INFO] 10.244.0.19:35396 - 48428 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130627s
	[INFO] 10.244.0.19:52016 - 30471 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000159395s
	[INFO] 10.244.0.19:32899 - 6704 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094688s
	[INFO] 10.244.0.19:40342 - 34745 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002393271s
	[INFO] 10.244.0.19:49767 - 60613 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002387601s
	[INFO] 10.244.0.19:43120 - 40168 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002766585s
	[INFO] 10.244.0.19:41761 - 125 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003199082s
	
	
	==> describe nodes <==
	Name:               addons-937561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-937561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=addons-937561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-937561
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-937561"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-937561
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:18:25 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:18:25 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:18:25 +0000   Sat, 29 Nov 2025 09:16:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:18:25 +0000   Sat, 29 Nov 2025 09:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-937561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                829b03eb-db97-4d35-b80b-ed10fd5f92a5
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-42lcn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  gadget                      gadget-dz9wn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gcp-auth                    gcp-auth-78565c9fb4-bz6gr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-8gjmc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m1s
	  kube-system                 coredns-66bc5c9577-dwkbv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m7s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpathplugin-w96sq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 etcd-addons-937561                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-wk9nw                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-937561                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-addons-937561       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-79sbl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-937561                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 metrics-server-85b7d694d7-jfpt2             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-2kd5l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-6b586f9694-9wb6d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-creds-764b6fb674-8q8xm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-proxy-5t68c                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-7d9fbc56b8-9qqmq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-nbcng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-v587p     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-n2vjf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node addons-937561 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node addons-937561 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node addons-937561 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s                  kubelet          Node addons-937561 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s                  kubelet          Node addons-937561 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s                  kubelet          Node addons-937561 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m8s                   node-controller  Node addons-937561 event: Registered Node addons-937561 in Controller
	  Normal   NodeReady                86s                    kubelet          Node addons-937561 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015149] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507546] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034739] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.833095] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +4.564053] kauditd_printk_skb: 35 callbacks suppressed
	[Nov29 08:31] hrtimer: interrupt took 8840027 ns
	[Nov29 09:14] kauditd_printk_skb: 8 callbacks suppressed
	[Nov29 09:16] overlayfs: idmapped layers are currently not supported
	[  +0.067811] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672] <==
	{"level":"warn","ts":"2025-11-29T09:16:18.123699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.146697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.160233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.183947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.199244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.217458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.231864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.247743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.275478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.289527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.326943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.331001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.344396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.370365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.393504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.430723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.445152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.486660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:18.579567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:35.142832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:35.158630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.323896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.338893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.386366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:57.401800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52732","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d49a00843822cbbf937c36479a92273ed7e64c8b86c3121dd1744c340c7bdb6e] <==
	2025/11/29 09:17:53 GCP Auth Webhook started!
	2025/11/29 09:18:23 Ready to marshal response ...
	2025/11/29 09:18:23 Ready to write response ...
	2025/11/29 09:18:23 Ready to marshal response ...
	2025/11/29 09:18:23 Ready to write response ...
	2025/11/29 09:18:24 Ready to marshal response ...
	2025/11/29 09:18:24 Ready to write response ...
	
	
	==> kernel <==
	 09:18:35 up  2:01,  0 user,  load average: 2.40, 2.81, 3.22
	Linux addons-937561 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9] <==
	E1129 09:16:59.347680       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 09:16:59.347685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 09:16:59.347788       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 09:16:59.349014       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 09:17:00.747417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:17:00.747513       1 metrics.go:72] Registering metrics
	I1129 09:17:00.747576       1 controller.go:711] "Syncing nftables rules"
	I1129 09:17:09.353385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:09.353450       1 main.go:301] handling current node
	I1129 09:17:19.346801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:19.346838       1 main.go:301] handling current node
	I1129 09:17:29.346919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:29.346952       1 main.go:301] handling current node
	I1129 09:17:39.347742       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:39.347919       1 main.go:301] handling current node
	I1129 09:17:49.350056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:49.350248       1 main.go:301] handling current node
	I1129 09:17:59.347701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:17:59.347729       1 main.go:301] handling current node
	I1129 09:18:09.347262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:18:09.347302       1 main.go:301] handling current node
	I1129 09:18:19.353386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:18:19.353416       1 main.go:301] handling current node
	I1129 09:18:29.347868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:18:29.348013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a] <==
	E1129 09:17:09.546713       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.251.250:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:09.547239       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.251.250:443: connect: connection refused
	E1129 09:17:09.547315       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.251.250:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:09.630846       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.251.250:443: connect: connection refused
	E1129 09:17:09.630888       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.251.250:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:34.190657       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:34.190753       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1129 09:17:34.190764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1129 09:17:34.193920       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:34.193965       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1129 09:17:34.193978       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1129 09:17:55.987664       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	W1129 09:17:55.987850       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 09:17:55.987905       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1129 09:17:55.990497       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	E1129 09:17:55.993688       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.255.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.255.142:443: connect: connection refused" logger="UnhandledError"
	I1129 09:17:56.142588       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1129 09:18:33.449554       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53788: use of closed network connection
	E1129 09:18:33.584244       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:53810: use of closed network connection
	
	
	==> kube-controller-manager [c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7] <==
	I1129 09:16:27.337962       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:16:27.354267       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:16:27.354310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:27.354331       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:16:27.354540       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:16:27.355035       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:16:27.355071       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:16:27.355105       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:16:27.355168       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:16:27.355922       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:16:27.356105       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:16:27.356180       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:16:27.358507       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:16:27.359060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:27.359284       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	E1129 09:16:57.316744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 09:16:57.316906       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1129 09:16:57.316945       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1129 09:16:57.357569       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1129 09:16:57.362767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1129 09:16:57.417468       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:16:57.463174       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:12.319339       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1129 09:17:27.422871       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 09:17:27.471559       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4] <==
	I1129 09:16:29.405538       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:29.493257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:29.597904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:29.598217       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 09:16:29.598289       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:29.655179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:29.655239       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:29.659655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:29.659928       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:29.659942       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:29.661328       1 config.go:200] "Starting service config controller"
	I1129 09:16:29.661338       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:29.661354       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:29.661358       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:29.661376       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:29.661380       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:29.662038       1 config.go:309] "Starting node config controller"
	I1129 09:16:29.662045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:29.662051       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:29.761928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:29.761963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:16:29.762002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a] <==
	I1129 09:16:19.543323       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:16:21.300344       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:16:21.300441       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:16:21.300474       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:16:21.300530       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:16:21.326670       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:16:21.326792       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:21.329496       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:16:21.329764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:21.329789       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:21.329807       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:16:21.430137       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:17:52 addons-937561 kubelet[1275]: I1129 09:17:52.461739    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03d42e3-f62a-4304-a8f4-ace9f9ad86d6" path="/var/lib/kubelet/pods/d03d42e3-f62a-4304-a8f4-ace9f9ad86d6/volumes"
	Nov 29 09:17:53 addons-937561 kubelet[1275]: I1129 09:17:53.966631    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-bz6gr" podStartSLOduration=50.702670189 podStartE2EDuration="1m15.966601152s" podCreationTimestamp="2025-11-29 09:16:38 +0000 UTC" firstStartedPulling="2025-11-29 09:17:27.828005 +0000 UTC m=+65.567959641" lastFinishedPulling="2025-11-29 09:17:53.091935964 +0000 UTC m=+90.831890604" observedRunningTime="2025-11-29 09:17:53.965817789 +0000 UTC m=+91.705772438" watchObservedRunningTime="2025-11-29 09:17:53.966601152 +0000 UTC m=+91.706555792"
	Nov 29 09:18:00 addons-937561 kubelet[1275]: I1129 09:18:00.124257    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-dz9wn" podStartSLOduration=67.481995308 podStartE2EDuration="1m27.124222334s" podCreationTimestamp="2025-11-29 09:16:33 +0000 UTC" firstStartedPulling="2025-11-29 09:17:38.209767499 +0000 UTC m=+75.949722148" lastFinishedPulling="2025-11-29 09:17:57.851994525 +0000 UTC m=+95.591949174" observedRunningTime="2025-11-29 09:17:58.004301463 +0000 UTC m=+95.744256153" watchObservedRunningTime="2025-11-29 09:18:00.124222334 +0000 UTC m=+97.864176975"
	Nov 29 09:18:00 addons-937561 kubelet[1275]: I1129 09:18:00.469413    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faab4aed-450f-40ef-b5d6-17f77e7d0074" path="/var/lib/kubelet/pods/faab4aed-450f-40ef-b5d6-17f77e7d0074/volumes"
	Nov 29 09:18:01 addons-937561 kubelet[1275]: I1129 09:18:01.631323    1275 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 29 09:18:01 addons-937561 kubelet[1275]: I1129 09:18:01.631396    1275 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 29 09:18:06 addons-937561 kubelet[1275]: I1129 09:18:06.459082    1275 scope.go:117] "RemoveContainer" containerID="e6c187a5ead0120836da917bca61434659d800e09f55fe809e66fe2903e0de0e"
	Nov 29 09:18:07 addons-937561 kubelet[1275]: I1129 09:18:07.044849    1275 scope.go:117] "RemoveContainer" containerID="e6c187a5ead0120836da917bca61434659d800e09f55fe809e66fe2903e0de0e"
	Nov 29 09:18:07 addons-937561 kubelet[1275]: I1129 09:18:07.064401    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-w96sq" podStartSLOduration=4.65592209 podStartE2EDuration="58.064382851s" podCreationTimestamp="2025-11-29 09:17:09 +0000 UTC" firstStartedPulling="2025-11-29 09:17:10.288206346 +0000 UTC m=+48.028160987" lastFinishedPulling="2025-11-29 09:18:03.696667099 +0000 UTC m=+101.436621748" observedRunningTime="2025-11-29 09:18:04.055418492 +0000 UTC m=+101.795373158" watchObservedRunningTime="2025-11-29 09:18:07.064382851 +0000 UTC m=+104.804337500"
	Nov 29 09:18:08 addons-937561 kubelet[1275]: I1129 09:18:08.170542    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm7hm\" (UniqueName: \"kubernetes.io/projected/a6716643-2ae0-4ee8-a12a-35af27e28e9c-kube-api-access-lm7hm\") pod \"a6716643-2ae0-4ee8-a12a-35af27e28e9c\" (UID: \"a6716643-2ae0-4ee8-a12a-35af27e28e9c\") "
	Nov 29 09:18:08 addons-937561 kubelet[1275]: I1129 09:18:08.173093    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6716643-2ae0-4ee8-a12a-35af27e28e9c-kube-api-access-lm7hm" (OuterVolumeSpecName: "kube-api-access-lm7hm") pod "a6716643-2ae0-4ee8-a12a-35af27e28e9c" (UID: "a6716643-2ae0-4ee8-a12a-35af27e28e9c"). InnerVolumeSpecName "kube-api-access-lm7hm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 29 09:18:08 addons-937561 kubelet[1275]: I1129 09:18:08.271008    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lm7hm\" (UniqueName: \"kubernetes.io/projected/a6716643-2ae0-4ee8-a12a-35af27e28e9c-kube-api-access-lm7hm\") on node \"addons-937561\" DevicePath \"\""
	Nov 29 09:18:09 addons-937561 kubelet[1275]: I1129 09:18:09.055028    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c18f3e3a89b35d945478619646bc93653cf08a9fc5c955784e51bec683931c"
	Nov 29 09:18:13 addons-937561 kubelet[1275]: E1129 09:18:13.615329    1275 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 29 09:18:13 addons-937561 kubelet[1275]: E1129 09:18:13.615420    1275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4407cb92-93a5-4523-b1da-d85a945d9fb8-gcr-creds podName:4407cb92-93a5-4523-b1da-d85a945d9fb8 nodeName:}" failed. No retries permitted until 2025-11-29 09:19:17.615402919 +0000 UTC m=+175.355357568 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/4407cb92-93a5-4523-b1da-d85a945d9fb8-gcr-creds") pod "registry-creds-764b6fb674-8q8xm" (UID: "4407cb92-93a5-4523-b1da-d85a945d9fb8") : secret "registry-creds-gcr" not found
	Nov 29 09:18:14 addons-937561 kubelet[1275]: W1129 09:18:14.751237    1275 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ff16db5210e7435b574bb56b67b8ebb5c38b1065b43e96433269bcb203d89e0f/crio-026780566ef704d173f45caaaa8685b6723d008d9e53fdd467f6f61762c15eb8 WatchSource:0}: Error finding container 026780566ef704d173f45caaaa8685b6723d008d9e53fdd467f6f61762c15eb8: Status 404 returned error can't find the container with id 026780566ef704d173f45caaaa8685b6723d008d9e53fdd467f6f61762c15eb8
	Nov 29 09:18:22 addons-937561 kubelet[1275]: I1129 09:18:22.380394    1275 scope.go:117] "RemoveContainer" containerID="d56113f5b214352914681c8b8d0ba6ae4dc9f6579e743e4de49689a57470dec8"
	Nov 29 09:18:22 addons-937561 kubelet[1275]: I1129 09:18:22.410895    1275 scope.go:117] "RemoveContainer" containerID="17fe880170e8a6033f9baa0069e6e39da5c3ee2a3bd0c8cac660d0d5b27c4c63"
	Nov 29 09:18:22 addons-937561 kubelet[1275]: E1129 09:18:22.567819    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ba7f81e411d2a203ce1d8f7daa19b78b89757653d410fa41c87670ec90911246/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ba7f81e411d2a203ce1d8f7daa19b78b89757653d410fa41c87670ec90911246/diff: no such file or directory, extraDiskErr: <nil>
	Nov 29 09:18:22 addons-937561 kubelet[1275]: E1129 09:18:22.575800    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8d6c10be2b8d9280d5ceec1e94028e8c3e7bc77c160651d5c904d534c53bbe5a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8d6c10be2b8d9280d5ceec1e94028e8c3e7bc77c160651d5c904d534c53bbe5a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 29 09:18:22 addons-937561 kubelet[1275]: E1129 09:18:22.577714    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/84cefee1f71582e4409beb036952d7fe041dccfa6f676c4ffd2c80cb4d2b09c2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/84cefee1f71582e4409beb036952d7fe041dccfa6f676c4ffd2c80cb4d2b09c2/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-create-ndlk6_d03d42e3-f62a-4304-a8f4-ace9f9ad86d6/create/0.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-create-ndlk6_d03d42e3-f62a-4304-a8f4-ace9f9ad86d6/create/0.log: no such file or directory
	Nov 29 09:18:23 addons-937561 kubelet[1275]: I1129 09:18:23.915166    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-8gjmc" podStartSLOduration=103.804997076 podStartE2EDuration="1m49.915147374s" podCreationTimestamp="2025-11-29 09:16:34 +0000 UTC" firstStartedPulling="2025-11-29 09:18:14.752729548 +0000 UTC m=+112.492684189" lastFinishedPulling="2025-11-29 09:18:20.862879846 +0000 UTC m=+118.602834487" observedRunningTime="2025-11-29 09:18:21.124735077 +0000 UTC m=+118.864689717" watchObservedRunningTime="2025-11-29 09:18:23.915147374 +0000 UTC m=+121.655102023"
	Nov 29 09:18:24 addons-937561 kubelet[1275]: I1129 09:18:24.007353    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchxd\" (UniqueName: \"kubernetes.io/projected/d7157c2f-990a-4dba-877d-2f1f6dc08159-kube-api-access-zchxd\") pod \"busybox\" (UID: \"d7157c2f-990a-4dba-877d-2f1f6dc08159\") " pod="default/busybox"
	Nov 29 09:18:24 addons-937561 kubelet[1275]: I1129 09:18:24.007606    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d7157c2f-990a-4dba-877d-2f1f6dc08159-gcp-creds\") pod \"busybox\" (UID: \"d7157c2f-990a-4dba-877d-2f1f6dc08159\") " pod="default/busybox"
	Nov 29 09:18:27 addons-937561 kubelet[1275]: I1129 09:18:27.137213    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.028335484 podStartE2EDuration="4.13719454s" podCreationTimestamp="2025-11-29 09:18:23 +0000 UTC" firstStartedPulling="2025-11-29 09:18:24.254605896 +0000 UTC m=+121.994560537" lastFinishedPulling="2025-11-29 09:18:26.363464952 +0000 UTC m=+124.103419593" observedRunningTime="2025-11-29 09:18:27.135950971 +0000 UTC m=+124.875905620" watchObservedRunningTime="2025-11-29 09:18:27.13719454 +0000 UTC m=+124.877149181"
	
	
	==> storage-provisioner [d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c] <==
	W1129 09:18:10.788007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:12.791467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:12.798385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:14.801238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:14.805931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:16.809829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:16.817599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:18.820532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:18.826417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:20.829333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:20.836029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:22.839221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:22.846803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:24.850614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:24.855383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:26.858848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:26.863412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:28.865848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:28.870580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:30.873943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:30.878742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:32.881870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:32.890050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:34.893650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:34.899076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-937561 -n addons-937561
helpers_test.go:269: (dbg) Run:  kubectl --context addons-937561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q registry-creds-764b6fb674-8q8xm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q registry-creds-764b6fb674-8q8xm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q registry-creds-764b6fb674-8q8xm: exit status 1 (93.691911ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cnhs2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t6l5q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8q8xm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-937561 describe pod ingress-nginx-admission-create-cnhs2 ingress-nginx-admission-patch-t6l5q registry-creds-764b6fb674-8q8xm: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.481316ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:36.715388  309581 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:36.716210  309581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:36.716224  309581 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:36.716231  309581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:36.716498  309581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:36.716787  309581 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:36.717158  309581 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:36.717177  309581 addons.go:622] checking whether the cluster is paused
	I1129 09:18:36.717287  309581 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:36.717303  309581 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:36.717810  309581 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:36.734857  309581 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:36.734915  309581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:36.753360  309581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:36.856927  309581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:36.857020  309581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:36.886923  309581 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:36.886952  309581 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:36.886957  309581 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:36.886961  309581 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:36.886965  309581 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:36.886970  309581 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:36.886973  309581 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:36.886976  309581 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:36.886979  309581 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:36.886985  309581 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:36.886989  309581 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:36.886991  309581 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:36.886995  309581 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:36.886998  309581 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:36.887001  309581 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:36.887006  309581 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:36.887010  309581 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:36.887014  309581 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:36.887017  309581 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:36.887020  309581 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:36.887025  309581 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:36.887028  309581 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:36.887031  309581 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:36.887034  309581 cri.go:89] found id: ""
	I1129 09:18:36.887087  309581 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:36.902370  309581 out.go:203] 
	W1129 09:18:36.905299  309581 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:36.905325  309581 out.go:285] * 
	* 
	W1129 09:18:36.911889  309581 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:36.914788  309581 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.07s)
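
The exit status 11 above is not specific to Headlamp: the addon-disable path first checks whether the cluster is paused by listing kube-system containers with crictl and then asking `sudo runc list -f json` which of them are paused, and on this crio node that second call fails with "open /run/runc: no such file or directory", which minikube surfaces as MK_ADDON_DISABLE_PAUSED. The sketch below reproduces just that check sequence; it is an illustration meant to be run directly on the node (e.g. via `minikube ssh`), not minikube's own implementation, and it assumes crictl and runc are on the node's PATH.

// pausedcheck.go: re-run the two commands from the failing check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: list kube-system container IDs, as the cri.go "found id" lines do.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl ps failed: %v\n%s", err, out)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))

	// Step 2: ask runc which containers are paused. On this node the call
	// exits non-zero ("open /run/runc: no such file or directory"), which is
	// the error the test run above reports as exit status 11.
	out, err = exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed (matches the error above): %v\n%s", err, out)
		return
	}
	fmt.Printf("runc list output:\n%s", out)
}

The same disable-time failure repeats verbatim in the remaining TestAddons/parallel blocks below in this report.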

                                                
                                    
TestAddons/parallel/CloudSpanner (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-42lcn" [0e122609-4e28-4e84-af00-4d9d11a9cf1d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003572591s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (267.161496ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:55.889380  310061 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:55.890217  310061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:55.890235  310061 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:55.890243  310061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:55.890532  310061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:55.890836  310061 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:55.891261  310061 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:55.891279  310061 addons.go:622] checking whether the cluster is paused
	I1129 09:18:55.891388  310061 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:55.891408  310061 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:55.891936  310061 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:55.920053  310061 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:55.920110  310061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:55.937612  310061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:56.041106  310061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:56.041209  310061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:56.072558  310061 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:56.072584  310061 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:56.072590  310061 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:56.072599  310061 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:56.072603  310061 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:56.072607  310061 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:56.072610  310061 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:56.072614  310061 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:56.072617  310061 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:56.072623  310061 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:56.072627  310061 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:56.072630  310061 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:56.072633  310061 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:56.072637  310061 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:56.072640  310061 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:56.072645  310061 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:56.072654  310061 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:56.072658  310061 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:56.072661  310061 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:56.072664  310061 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:56.072669  310061 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:56.072677  310061 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:56.072681  310061 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:56.072683  310061 cri.go:89] found id: ""
	I1129 09:18:56.072740  310061 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:56.090018  310061 out.go:203] 
	W1129 09:18:56.094241  310061 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:56.094279  310061 out.go:285] * 
	* 
	W1129 09:18:56.100650  310061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:56.104741  310061 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.29s)
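
Before the disable step fails, the wait at the top of this block ("app=cloud-spanner-emulator healthy within 6.003572591s") is the usual label-based pod readiness poll. Below is a minimal sketch of that poll, shelling out to kubectl the way the helpers do; the context name is taken from the log above, and checking only the Running phase (rather than full readiness) is a simplification of what the helper actually verifies.

// waitpod.go: poll until a pod labelled app=cloud-spanner-emulator is Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Print "<name> <phase>" for each matching pod in the default namespace.
		out, err := exec.Command("kubectl", "--context", "addons-937561",
			"get", "pods", "-n", "default", "-l", "app=cloud-spanner-emulator",
			"-o", `jsonpath={range .items[*]}{.metadata.name} {.status.phase}{"\n"}{end}`).Output()
		if err == nil {
			for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
				if strings.HasSuffix(line, " Running") {
					fmt.Printf("healthy: %s\n", line)
					return
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for app=cloud-spanner-emulator")
}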

                                                
                                    
TestAddons/parallel/LocalPath (8.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-937561 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-937561 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d41bb2a4-863a-4f6e-8b6e-f800b6055c2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d41bb2a4-863a-4f6e-8b6e-f800b6055c2e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d41bb2a4-863a-4f6e-8b6e-f800b6055c2e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003521143s
addons_test.go:967: (dbg) Run:  kubectl --context addons-937561 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 ssh "cat /opt/local-path-provisioner/pvc-e16cf624-9fea-4565-93bd-22ce2cfea277_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-937561 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-937561 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (268.713043ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:57.821110  310196 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:57.822006  310196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:57.822052  310196 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:57.822118  310196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:57.822436  310196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:57.822781  310196 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:57.823212  310196 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:57.823259  310196 addons.go:622] checking whether the cluster is paused
	I1129 09:18:57.823398  310196 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:57.823434  310196 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:57.823972  310196 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:57.840791  310196 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:57.840849  310196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:57.859361  310196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:57.965120  310196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:57.965198  310196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:57.997988  310196 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:57.998011  310196 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:57.998016  310196 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:57.998019  310196 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:57.998023  310196 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:57.998026  310196 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:57.998029  310196 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:57.998032  310196 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:57.998035  310196 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:57.998041  310196 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:57.998044  310196 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:57.998047  310196 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:57.998051  310196 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:57.998054  310196 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:57.998059  310196 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:57.998064  310196 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:57.998067  310196 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:57.998071  310196 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:57.998103  310196 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:57.998106  310196 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:57.998112  310196 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:57.998115  310196 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:57.998119  310196 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:57.998126  310196 cri.go:89] found id: ""
	I1129 09:18:57.998177  310196 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:58.020426  310196 out.go:203] 
	W1129 09:18:58.023459  310196 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:58.023492  310196 out.go:285] * 
	* 
	W1129 09:18:58.030188  310196 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:58.033330  310196 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.42s)
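The addon-disable failures in this group all abort at the same probe: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio node that command exits 1 because /run/runc is absent, so the CLI bails out with MK_ADDON_DISABLE_PAUSED even though the kube-system containers listed just above are running. A minimal manual reproduction from the host, assuming shell access via `minikube ssh` (the two commands mirror the ssh_runner calls in the stderr above; wrapping them in `minikube ssh` is only for hand-checking, not the harness's own code path):

    # container listing that succeeds (matches the crictl call in the log)
    minikube -p addons-937561 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # paused-state probe that fails with "open /run/runc: no such file or directory"
    minikube -p addons-937561 ssh -- sudo runc list -f json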

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.39s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-2kd5l" [fae62791-4a33-4c22-8cb5-0bba319c7b03] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003509484s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (381.052825ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:49.321656  309785 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:49.322622  309785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:49.322670  309785 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:49.322691  309785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:49.323042  309785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:49.323692  309785 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:49.324874  309785 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:49.324917  309785 addons.go:622] checking whether the cluster is paused
	I1129 09:18:49.325064  309785 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:49.325440  309785 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:49.326045  309785 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:49.352583  309785 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:49.352636  309785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:49.393561  309785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:49.526028  309785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:49.526139  309785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:49.579743  309785 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:49.579767  309785 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:49.579772  309785 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:49.579776  309785 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:49.579784  309785 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:49.579788  309785 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:49.579791  309785 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:49.579795  309785 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:49.579798  309785 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:49.579807  309785 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:49.579810  309785 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:49.579813  309785 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:49.579816  309785 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:49.579819  309785 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:49.579822  309785 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:49.579831  309785 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:49.579838  309785 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:49.579843  309785 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:49.579846  309785 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:49.579849  309785 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:49.579854  309785 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:49.579856  309785 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:49.579859  309785 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:49.579863  309785 cri.go:89] found id: ""
	I1129 09:18:49.579912  309785 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:49.596529  309785 out.go:203] 
	W1129 09:18:49.599488  309785 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:49.599515  309785 out.go:285] * 
	* 
	W1129 09:18:49.607101  309785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:49.610425  309785 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.39s)

                                                
                                    
TestAddons/parallel/Yakd (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-n2vjf" [3fe6333f-a5dc-46b6-a91c-75bb05bf45bc] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003654808s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-937561 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-937561 addons disable yakd --alsologtostderr -v=1: exit status 11 (307.464538ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:18:42.987929  309642 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:42.988734  309642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:42.988762  309642 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:42.988780  309642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:42.989049  309642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:18:42.989366  309642 mustload.go:66] Loading cluster: addons-937561
	I1129 09:18:42.989793  309642 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:42.989827  309642 addons.go:622] checking whether the cluster is paused
	I1129 09:18:42.989971  309642 config.go:182] Loaded profile config "addons-937561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:42.990001  309642 host.go:66] Checking if "addons-937561" exists ...
	I1129 09:18:42.990555  309642 cli_runner.go:164] Run: docker container inspect addons-937561 --format={{.State.Status}}
	I1129 09:18:43.018920  309642 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:43.018991  309642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-937561
	I1129 09:18:43.040735  309642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/addons-937561/id_rsa Username:docker}
	I1129 09:18:43.152569  309642 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:43.152666  309642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:43.195743  309642 cri.go:89] found id: "3c1b8b66c425e950583324ebf42b8febdcd36ac6679fd050365d6b44158d6298"
	I1129 09:18:43.195766  309642 cri.go:89] found id: "9aef6b7b60e4c61b854226798286fa0f294e1bd9cd8023285a92675646bae4a0"
	I1129 09:18:43.195771  309642 cri.go:89] found id: "f225ca290de2866544f4266aab2994cbb542ce1b970c9b4595f00055fc40c360"
	I1129 09:18:43.195775  309642 cri.go:89] found id: "30eb3a8c8cd59bbf082af67205ec66fc831b71e5e4e15293cf493837e6f42fd8"
	I1129 09:18:43.195779  309642 cri.go:89] found id: "e6e40e77afa28c08fefc34c435ffec576f1360edf9ac33f83c9336d6c556cfa5"
	I1129 09:18:43.195784  309642 cri.go:89] found id: "11d43a48abd4bb9e5cd937543ff98c54e3eadf86a3eec4d1462d0295323c14a2"
	I1129 09:18:43.195787  309642 cri.go:89] found id: "ffd3ddcf27f552ebb0270b83bbfb7baab86c060ee8390e24624aaf7ac789c3e1"
	I1129 09:18:43.195790  309642 cri.go:89] found id: "af2e25ba5927658c8e58bf2049a11e64abc561695fecf718a590b1c50b5c45f4"
	I1129 09:18:43.195793  309642 cri.go:89] found id: "c8fe1df2373bbc27b465fd9de2b287df8b0b8e0b73778c2ce77cd3becd4e417d"
	I1129 09:18:43.195802  309642 cri.go:89] found id: "b9fd6b139f9a624d5155ab2e15c9a2bef2ce52630a03be5e1cf4cbbf192867b5"
	I1129 09:18:43.195806  309642 cri.go:89] found id: "66fc5abcc6517bba6b3ca8ca3ec6cc0d0637d7ee7f41538fd591ad3df77a25a7"
	I1129 09:18:43.195810  309642 cri.go:89] found id: "f8cb526e085ff4bfab67839a9d9698cddb536a085d430bf3b94cc35075cd4437"
	I1129 09:18:43.195818  309642 cri.go:89] found id: "0c0ef85d8b377b9706c6f2454bcd25662aeceeef6bb0df529bfd8d0d5d37325d"
	I1129 09:18:43.195821  309642 cri.go:89] found id: "5bc214d6f747ad71dc173e7de6938306a138c4e53ef930bbc04896ed5f8630df"
	I1129 09:18:43.195824  309642 cri.go:89] found id: "6159812cd62ca3d6e0f3d9163ef81e722e58a71e7ab3cfc4e163bb866cc7b3ce"
	I1129 09:18:43.195833  309642 cri.go:89] found id: "cea7127d80def50b029992a5053c0e13fa9f63b653e0baab4bd3e0c7a62fee57"
	I1129 09:18:43.195836  309642 cri.go:89] found id: "d8b057511cccc6e7c0bedc009a04433fb448bd8a14bf1e70b0ae5062fe1c102c"
	I1129 09:18:43.195843  309642 cri.go:89] found id: "febc943f90d57ca8f46b21ff421f46d4ce896f8605134cff36e1d3622e355fb9"
	I1129 09:18:43.195847  309642 cri.go:89] found id: "8f16da7a481b21c7d898af445c84ceedd74be7994224995aec4bfdc412549ea4"
	I1129 09:18:43.195850  309642 cri.go:89] found id: "1f72b846137bbc642e1ade4cf19a73ca4c586c6661f53912c713b9e87612a58a"
	I1129 09:18:43.195858  309642 cri.go:89] found id: "b28d6a65a1d2ecf2be5a951e6c22118eda9e59696483fdf9c90273b5f5db9672"
	I1129 09:18:43.195866  309642 cri.go:89] found id: "465f08cb21ea04097b034d3b01246d8b0718c825d63754b159891c61c959358a"
	I1129 09:18:43.195869  309642 cri.go:89] found id: "c0d24f1fa0e94a02c280f018ee32222492282580ee35e0f7041df6d82a2600a7"
	I1129 09:18:43.195872  309642 cri.go:89] found id: ""
	I1129 09:18:43.195923  309642 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:43.213974  309642 out.go:203] 
	W1129 09:18:43.216901  309642 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:43.216930  309642 out.go:285] * 
	* 
	W1129 09:18:43.223556  309642 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:43.226695  309642 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-937561 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-014829 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-014829 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sjv8d" [766b0629-9f33-42de-95ae-98ef95269a7f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-014829 -n functional-014829
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-29 09:35:21.934290428 +0000 UTC m=+1207.590582845
functional_test.go:1645: (dbg) Run:  kubectl --context functional-014829 describe po hello-node-connect-7d85dfc575-sjv8d -n default
functional_test.go:1645: (dbg) kubectl --context functional-014829 describe po hello-node-connect-7d85dfc575-sjv8d -n default:
Name:             hello-node-connect-7d85dfc575-sjv8d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014829/192.168.49.2
Start Time:       Sat, 29 Nov 2025 09:25:21 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qsp4h (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qsp4h:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sjv8d to functional-014829
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-014829 logs hello-node-connect-7d85dfc575-sjv8d -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-014829 logs hello-node-connect-7d85dfc575-sjv8d -n default: exit status 1 (105.535787ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-sjv8d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-014829 logs hello-node-connect-7d85dfc575-sjv8d -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
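Every pull attempt here fails with the same kubelet event: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". With crio, short image names are resolved through the containers-registries short-name configuration, and in enforcing mode a short name with no alias that matches more than one unqualified-search registry is rejected instead of being guessed. A quick way to confirm and work around this manually, assuming the standard registries.conf location and using docker.io/kicbase/echo-server purely as an example of a fully qualified reference (not necessarily the image the suite intends):

    # inspect the short-name policy and unqualified-search registries on the node
    minikube -p functional-014829 ssh -- sudo cat /etc/containers/registries.conf
    # retry the deployment with a fully qualified image name (illustrative only)
    kubectl --context functional-014829 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest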
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-014829 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-sjv8d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014829/192.168.49.2
Start Time:       Sat, 29 Nov 2025 09:25:21 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qsp4h (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qsp4h:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sjv8d to functional-014829
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-014829 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-014829 logs -l app=hello-node-connect: exit status 1 (87.299738ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-sjv8d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-014829 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-014829 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.80.248
IPs:                      10.100.80.248
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31252/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
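The empty Endpoints field above is the expected consequence of the pull failure: the NodePort service selects app=hello-node-connect, but its only pod never became Ready, so no endpoints are published and NodePort 31252 has nothing to route to. A one-line check, assuming the same kubectl context:

    # endpoints stay empty until at least one selected pod is Ready
    kubectl --context functional-014829 get endpoints hello-node-connect -o wide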
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-014829
helpers_test.go:243: (dbg) docker inspect functional-014829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee",
	        "Created": "2025-11-29T09:22:47.288026383Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:22:47.346567446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee/hosts",
	        "LogPath": "/var/lib/docker/containers/af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee/af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee-json.log",
	        "Name": "/functional-014829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-014829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-014829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "af2a68c6af6f1dc01136424b0edfd37514dfba8d3f08ffd372abe85669cf18ee",
	                "LowerDir": "/var/lib/docker/overlay2/cedd01e0ed622a63ca77332b891c8c73fd7b4b2b7297b9ccbcabe8bbff673d7b-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cedd01e0ed622a63ca77332b891c8c73fd7b4b2b7297b9ccbcabe8bbff673d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cedd01e0ed622a63ca77332b891c8c73fd7b4b2b7297b9ccbcabe8bbff673d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cedd01e0ed622a63ca77332b891c8c73fd7b4b2b7297b9ccbcabe8bbff673d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-014829",
	                "Source": "/var/lib/docker/volumes/functional-014829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-014829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-014829",
	                "name.minikube.sigs.k8s.io": "functional-014829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e153545fccdd5b68604d626905e22d7a65b5de3b0e810ad970898c18e060445",
	            "SandboxKey": "/var/run/docker/netns/0e153545fccd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-014829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:01:9a:0d:bc:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6b7d87b26b3f8292eaea18a85d633def96a169a223fd94150de4958ef9488244",
	                    "EndpointID": "99a43697fcafbe331b428bce48c681cea503e92b4252e0bb3c17f9e255f47d76",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-014829",
	                        "af2a68c6af6f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-014829 -n functional-014829
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 logs -n 25: (1.492715327s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ kubectl │ functional-014829 kubectl -- --context functional-014829 get pods                                                          │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:24 UTC │
	│ start   │ -p functional-014829 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:24 UTC │ 29 Nov 25 09:25 UTC │
	│ service │ invalid-svc -p functional-014829                                                                                           │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ cp      │ functional-014829 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ config  │ functional-014829 config unset cpus                                                                                        │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ config  │ functional-014829 config get cpus                                                                                          │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ config  │ functional-014829 config set cpus 2                                                                                        │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ config  │ functional-014829 config get cpus                                                                                          │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ config  │ functional-014829 config unset cpus                                                                                        │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ ssh     │ functional-014829 ssh -n functional-014829 sudo cat /home/docker/cp-test.txt                                               │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ config  │ functional-014829 config get cpus                                                                                          │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ ssh     │ functional-014829 ssh echo hello                                                                                           │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ cp      │ functional-014829 cp functional-014829:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1727312286/001/cp-test.txt │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ ssh     │ functional-014829 ssh cat /etc/hostname                                                                                    │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ ssh     │ functional-014829 ssh -n functional-014829 sudo cat /home/docker/cp-test.txt                                               │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ tunnel  │ functional-014829 tunnel --alsologtostderr                                                                                 │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ tunnel  │ functional-014829 tunnel --alsologtostderr                                                                                 │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ cp      │ functional-014829 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ tunnel  │ functional-014829 tunnel --alsologtostderr                                                                                 │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │                     │
	│ ssh     │ functional-014829 ssh -n functional-014829 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ addons  │ functional-014829 addons list                                                                                              │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	│ addons  │ functional-014829 addons list -o json                                                                                      │ functional-014829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:25 UTC │ 29 Nov 25 09:25 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:24:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:24:32.615252  322037 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:24:32.615659  322037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:32.615664  322037 out.go:374] Setting ErrFile to fd 2...
	I1129 09:24:32.615669  322037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:24:32.616128  322037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:24:32.616648  322037 out.go:368] Setting JSON to false
	I1129 09:24:32.617587  322037 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7622,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:24:32.617713  322037 start.go:143] virtualization:  
	I1129 09:24:32.621224  322037 out.go:179] * [functional-014829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:24:32.623395  322037 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:24:32.623526  322037 notify.go:221] Checking for updates...
	I1129 09:24:32.629116  322037 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:24:32.631907  322037 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:24:32.634797  322037 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:24:32.637594  322037 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:24:32.640574  322037 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:24:32.644103  322037 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:24:32.644198  322037 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:24:32.667499  322037 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:24:32.667614  322037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:32.727407  322037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-29 09:24:32.718490985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:32.727502  322037 docker.go:319] overlay module found
	I1129 09:24:32.730704  322037 out.go:179] * Using the docker driver based on existing profile
	I1129 09:24:32.733552  322037 start.go:309] selected driver: docker
	I1129 09:24:32.733560  322037 start.go:927] validating driver "docker" against &{Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:32.733657  322037 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:24:32.733793  322037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:24:32.787359  322037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-29 09:24:32.777676503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:24:32.787758  322037 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:32.787785  322037 cni.go:84] Creating CNI manager for ""
	I1129 09:24:32.787842  322037 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:24:32.787884  322037 start.go:353] cluster config:
	{Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:32.791153  322037 out.go:179] * Starting "functional-014829" primary control-plane node in "functional-014829" cluster
	I1129 09:24:32.794050  322037 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:24:32.797044  322037 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:24:32.799892  322037 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:24:32.799938  322037 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 09:24:32.799962  322037 cache.go:65] Caching tarball of preloaded images
	I1129 09:24:32.799962  322037 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:24:32.800053  322037 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 09:24:32.800061  322037 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:24:32.800172  322037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/config.json ...
	I1129 09:24:32.817890  322037 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:24:32.817901  322037 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:24:32.817914  322037 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:24:32.817942  322037 start.go:360] acquireMachinesLock for functional-014829: {Name:mkfe0eee8cab75beba1f78f3627f2adfea13605e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:24:32.818020  322037 start.go:364] duration metric: took 62.942µs to acquireMachinesLock for "functional-014829"
	I1129 09:24:32.818038  322037 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:24:32.818042  322037 fix.go:54] fixHost starting: 
	I1129 09:24:32.818326  322037 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
	I1129 09:24:32.835114  322037 fix.go:112] recreateIfNeeded on functional-014829: state=Running err=<nil>
	W1129 09:24:32.835134  322037 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:24:32.838302  322037 out.go:252] * Updating the running docker "functional-014829" container ...
	I1129 09:24:32.838327  322037 machine.go:94] provisionDockerMachine start ...
	I1129 09:24:32.838412  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:32.855554  322037 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:32.855877  322037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1129 09:24:32.855883  322037 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:24:33.005534  322037 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-014829
	
	I1129 09:24:33.005547  322037 ubuntu.go:182] provisioning hostname "functional-014829"
	I1129 09:24:33.005626  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:33.040301  322037 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:33.040596  322037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1129 09:24:33.040605  322037 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-014829 && echo "functional-014829" | sudo tee /etc/hostname
	I1129 09:24:33.203684  322037 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-014829
	
	I1129 09:24:33.203766  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:33.231842  322037 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:33.232180  322037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1129 09:24:33.232195  322037 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-014829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-014829/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-014829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:24:33.382532  322037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:24:33.382554  322037 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 09:24:33.382578  322037 ubuntu.go:190] setting up certificates
	I1129 09:24:33.382587  322037 provision.go:84] configureAuth start
	I1129 09:24:33.382668  322037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-014829
	I1129 09:24:33.401990  322037 provision.go:143] copyHostCerts
	I1129 09:24:33.402061  322037 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 09:24:33.402168  322037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 09:24:33.402246  322037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 09:24:33.402345  322037 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 09:24:33.402349  322037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 09:24:33.402373  322037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 09:24:33.402421  322037 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 09:24:33.402424  322037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 09:24:33.402446  322037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 09:24:33.402531  322037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.functional-014829 san=[127.0.0.1 192.168.49.2 functional-014829 localhost minikube]
	I1129 09:24:33.579639  322037 provision.go:177] copyRemoteCerts
	I1129 09:24:33.579694  322037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:24:33.579738  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:33.597823  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:33.705645  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:24:33.722745  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:24:33.740419  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:24:33.758020  322037 provision.go:87] duration metric: took 375.4116ms to configureAuth
	I1129 09:24:33.758038  322037 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:24:33.758249  322037 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:24:33.758346  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:33.775269  322037 main.go:143] libmachine: Using SSH client type: native
	I1129 09:24:33.775580  322037 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1129 09:24:33.775591  322037 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:24:39.186187  322037 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:24:39.186203  322037 machine.go:97] duration metric: took 6.347869348s to provisionDockerMachine
	I1129 09:24:39.186213  322037 start.go:293] postStartSetup for "functional-014829" (driver="docker")
	I1129 09:24:39.186222  322037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:24:39.186301  322037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:24:39.186343  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:39.204619  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:39.309764  322037 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:24:39.313128  322037 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:24:39.313145  322037 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:24:39.313155  322037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 09:24:39.313210  322037 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 09:24:39.313284  322037 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 09:24:39.313361  322037 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/test/nested/copy/302182/hosts -> hosts in /etc/test/nested/copy/302182
	I1129 09:24:39.313412  322037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/302182
	I1129 09:24:39.321079  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 09:24:39.338697  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/test/nested/copy/302182/hosts --> /etc/test/nested/copy/302182/hosts (40 bytes)
	I1129 09:24:39.356588  322037 start.go:296] duration metric: took 170.359639ms for postStartSetup
	I1129 09:24:39.356679  322037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:24:39.356716  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:39.373515  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:39.475253  322037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:24:39.480365  322037 fix.go:56] duration metric: took 6.662314985s for fixHost
	I1129 09:24:39.480380  322037 start.go:83] releasing machines lock for "functional-014829", held for 6.662353197s
	I1129 09:24:39.480455  322037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-014829
	I1129 09:24:39.497334  322037 ssh_runner.go:195] Run: cat /version.json
	I1129 09:24:39.497379  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:39.497645  322037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:24:39.497693  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:39.522274  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:39.526189  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:39.629695  322037 ssh_runner.go:195] Run: systemctl --version
	I1129 09:24:39.722945  322037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:24:39.758952  322037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:24:39.763140  322037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:24:39.763204  322037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:24:39.771021  322037 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:24:39.771036  322037 start.go:496] detecting cgroup driver to use...
	I1129 09:24:39.771067  322037 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 09:24:39.771115  322037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:24:39.786719  322037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:24:39.799901  322037 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:24:39.799954  322037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:24:39.815855  322037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:24:39.828987  322037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:24:39.970034  322037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:24:40.152050  322037 docker.go:234] disabling docker service ...
	I1129 09:24:40.152111  322037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:24:40.169628  322037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:24:40.188126  322037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:24:40.340642  322037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:24:40.491657  322037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:24:40.504742  322037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:24:40.520382  322037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:24:40.520452  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.529813  322037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:24:40.529880  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.539303  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.547983  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.556641  322037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:24:40.564668  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.574129  322037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.582726  322037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:24:40.591344  322037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:24:40.599030  322037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:24:40.606499  322037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:40.738751  322037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:24:40.998933  322037 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:24:40.998991  322037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:24:41.002692  322037 start.go:564] Will wait 60s for crictl version
	I1129 09:24:41.002746  322037 ssh_runner.go:195] Run: which crictl
	I1129 09:24:41.007295  322037 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:24:41.041212  322037 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:24:41.041306  322037 ssh_runner.go:195] Run: crio --version
	I1129 09:24:41.074887  322037 ssh_runner.go:195] Run: crio --version
	I1129 09:24:41.105963  322037 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:24:41.108930  322037 cli_runner.go:164] Run: docker network inspect functional-014829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:24:41.126053  322037 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1129 09:24:41.135836  322037 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1129 09:24:41.138812  322037 kubeadm.go:884] updating cluster {Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:24:41.138946  322037 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:24:41.139016  322037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:41.172428  322037 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:24:41.172440  322037 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:24:41.172501  322037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:24:41.227360  322037 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:24:41.227372  322037 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:24:41.227378  322037 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1129 09:24:41.227491  322037 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-014829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:24:41.227595  322037 ssh_runner.go:195] Run: crio config
	I1129 09:24:41.298638  322037 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1129 09:24:41.298657  322037 cni.go:84] Creating CNI manager for ""
	I1129 09:24:41.298666  322037 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:24:41.298674  322037 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:24:41.298695  322037 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-014829 NodeName:functional-014829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:24:41.298821  322037 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-014829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:24:41.298885  322037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:24:41.310789  322037 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:24:41.310863  322037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:24:41.321733  322037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:24:41.336712  322037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:24:41.351305  322037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1129 09:24:41.364832  322037 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:24:41.368726  322037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:41.512499  322037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:41.525219  322037 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829 for IP: 192.168.49.2
	I1129 09:24:41.525230  322037 certs.go:195] generating shared ca certs ...
	I1129 09:24:41.525257  322037 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:41.525398  322037 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 09:24:41.525444  322037 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 09:24:41.525449  322037 certs.go:257] generating profile certs ...
	I1129 09:24:41.525532  322037 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.key
	I1129 09:24:41.525597  322037 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/apiserver.key.4857a1e9
	I1129 09:24:41.525637  322037 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/proxy-client.key
	I1129 09:24:41.525738  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 09:24:41.525767  322037 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 09:24:41.525775  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:24:41.525800  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:24:41.525821  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:24:41.525843  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 09:24:41.525889  322037 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 09:24:41.526570  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:24:41.544921  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:24:41.561897  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:24:41.579179  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:24:41.596441  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:24:41.614354  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:24:41.632247  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:24:41.650482  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:24:41.668363  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:24:41.686823  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 09:24:41.704753  322037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 09:24:41.722522  322037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:24:41.735785  322037 ssh_runner.go:195] Run: openssl version
	I1129 09:24:41.742186  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:24:41.750754  322037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:41.754384  322037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:41.754439  322037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:24:41.795190  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:24:41.803041  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 09:24:41.811577  322037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 09:24:41.815567  322037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 09:24:41.815634  322037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 09:24:41.856840  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 09:24:41.865117  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 09:24:41.873666  322037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 09:24:41.877581  322037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 09:24:41.877634  322037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 09:24:41.919644  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:24:41.927611  322037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:24:41.931292  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:24:41.977045  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:24:42.019835  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:24:42.070949  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:24:42.116670  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:24:42.164361  322037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 09:24:42.208105  322037 kubeadm.go:401] StartCluster: {Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:24:42.208189  322037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:24:42.208254  322037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:24:42.257338  322037 cri.go:89] found id: "03a4e729e9c3e4e86b98aa6941291ada24e6cd8c1dead65536bc1086e7e52090"
	I1129 09:24:42.257350  322037 cri.go:89] found id: "ce9301f2b209f99a8739dab2f290a480232e4a2a065c5a4142e221271d0e6ac1"
	I1129 09:24:42.257353  322037 cri.go:89] found id: "37cd1c2d49a7138aededfa39a7418964ff8f3f1734fa1b891b53304fd1da0efe"
	I1129 09:24:42.257356  322037 cri.go:89] found id: "2d1065d29e9930f5eddcd282046520c04d8e1ccd4ff0fe1a75b7d1d62d156938"
	I1129 09:24:42.257358  322037 cri.go:89] found id: "877a4960942a0b02d1c53c5d5daa693193b2f21386d72be8cb459f25084e7cbe"
	I1129 09:24:42.257360  322037 cri.go:89] found id: "529736bb6e8b851a16bad1aed295fe21f77cd931176805dc28eb3cf63538d52b"
	I1129 09:24:42.257362  322037 cri.go:89] found id: "eff0cc2d532cbb2fa06f918890a448b707058dead066d71d63a0e098aa0ff37e"
	I1129 09:24:42.257364  322037 cri.go:89] found id: "fc9fd3e84e009e9ca1504f76c781cf87bf4ae29c52811fbf8acdb1c311c15aa4"
	I1129 09:24:42.257366  322037 cri.go:89] found id: ""
	I1129 09:24:42.257417  322037 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:24:42.271786  322037 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:24:42Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:24:42.271864  322037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:24:42.280451  322037 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:24:42.280462  322037 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:24:42.280530  322037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:24:42.288700  322037 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:42.289238  322037 kubeconfig.go:125] found "functional-014829" server: "https://192.168.49.2:8441"
	I1129 09:24:42.290749  322037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:24:42.300347  322037 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-29 09:22:55.263579551 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-29 09:24:41.361060075 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1129 09:24:42.300362  322037 kubeadm.go:1161] stopping kube-system containers ...
	I1129 09:24:42.300375  322037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1129 09:24:42.300439  322037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:24:42.330929  322037 cri.go:89] found id: "03a4e729e9c3e4e86b98aa6941291ada24e6cd8c1dead65536bc1086e7e52090"
	I1129 09:24:42.330942  322037 cri.go:89] found id: "ce9301f2b209f99a8739dab2f290a480232e4a2a065c5a4142e221271d0e6ac1"
	I1129 09:24:42.330946  322037 cri.go:89] found id: "37cd1c2d49a7138aededfa39a7418964ff8f3f1734fa1b891b53304fd1da0efe"
	I1129 09:24:42.330949  322037 cri.go:89] found id: "2d1065d29e9930f5eddcd282046520c04d8e1ccd4ff0fe1a75b7d1d62d156938"
	I1129 09:24:42.330952  322037 cri.go:89] found id: "877a4960942a0b02d1c53c5d5daa693193b2f21386d72be8cb459f25084e7cbe"
	I1129 09:24:42.330955  322037 cri.go:89] found id: "529736bb6e8b851a16bad1aed295fe21f77cd931176805dc28eb3cf63538d52b"
	I1129 09:24:42.330957  322037 cri.go:89] found id: "eff0cc2d532cbb2fa06f918890a448b707058dead066d71d63a0e098aa0ff37e"
	I1129 09:24:42.330959  322037 cri.go:89] found id: "fc9fd3e84e009e9ca1504f76c781cf87bf4ae29c52811fbf8acdb1c311c15aa4"
	I1129 09:24:42.330962  322037 cri.go:89] found id: ""
	I1129 09:24:42.330967  322037 cri.go:252] Stopping containers: [03a4e729e9c3e4e86b98aa6941291ada24e6cd8c1dead65536bc1086e7e52090 ce9301f2b209f99a8739dab2f290a480232e4a2a065c5a4142e221271d0e6ac1 37cd1c2d49a7138aededfa39a7418964ff8f3f1734fa1b891b53304fd1da0efe 2d1065d29e9930f5eddcd282046520c04d8e1ccd4ff0fe1a75b7d1d62d156938 877a4960942a0b02d1c53c5d5daa693193b2f21386d72be8cb459f25084e7cbe 529736bb6e8b851a16bad1aed295fe21f77cd931176805dc28eb3cf63538d52b eff0cc2d532cbb2fa06f918890a448b707058dead066d71d63a0e098aa0ff37e fc9fd3e84e009e9ca1504f76c781cf87bf4ae29c52811fbf8acdb1c311c15aa4]
	I1129 09:24:42.331025  322037 ssh_runner.go:195] Run: which crictl
	I1129 09:24:42.335236  322037 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 03a4e729e9c3e4e86b98aa6941291ada24e6cd8c1dead65536bc1086e7e52090 ce9301f2b209f99a8739dab2f290a480232e4a2a065c5a4142e221271d0e6ac1 37cd1c2d49a7138aededfa39a7418964ff8f3f1734fa1b891b53304fd1da0efe 2d1065d29e9930f5eddcd282046520c04d8e1ccd4ff0fe1a75b7d1d62d156938 877a4960942a0b02d1c53c5d5daa693193b2f21386d72be8cb459f25084e7cbe 529736bb6e8b851a16bad1aed295fe21f77cd931176805dc28eb3cf63538d52b eff0cc2d532cbb2fa06f918890a448b707058dead066d71d63a0e098aa0ff37e fc9fd3e84e009e9ca1504f76c781cf87bf4ae29c52811fbf8acdb1c311c15aa4
	I1129 09:24:42.403820  322037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1129 09:24:42.521013  322037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:24:42.529004  322037 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Nov 29 09:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 29 09:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 29 09:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 29 09:23 /etc/kubernetes/scheduler.conf
	
	I1129 09:24:42.529073  322037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1129 09:24:42.537435  322037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1129 09:24:42.545053  322037 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:42.545119  322037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:24:42.552574  322037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1129 09:24:42.560350  322037 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:42.560410  322037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:24:42.567939  322037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1129 09:24:42.575575  322037 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:24:42.575645  322037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
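
The grep-and-remove sequence above prunes kubeconfigs that no longer reference the expected control-plane endpoint, so the later `kubeadm init phase kubeconfig` run can regenerate them. A minimal Go sketch of that idea, assuming the endpoint and file paths taken from the log (the helper pruneStaleKubeconfigs is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given files that do not mention the
// expected control-plane endpoint, as the grep/rm pairs in the log do.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to prune
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
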
	I1129 09:24:42.582912  322037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:24:42.590770  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:42.644151  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:43.967468  322037 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.323291187s)
	I1129 09:24:43.967535  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:44.190778  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:44.251655  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:44.322713  322037 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:44.322800  322037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:44.823545  322037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:45.322864  322037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:45.343875  322037 api_server.go:72] duration metric: took 1.021162737s to wait for apiserver process to appear ...
	I1129 09:24:45.343892  322037 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:45.343910  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:49.741018  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:24:49.741034  322037 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:24:49.741046  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:49.766253  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:24:49.766268  322037 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:24:49.844499  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:49.899874  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:24:49.899894  322037 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:24:50.344052  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:50.363403  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:24:50.363422  322037 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:24:50.844807  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:50.855269  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:24:50.855290  322037 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:24:51.344939  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:51.353640  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1129 09:24:51.367397  322037 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:51.367415  322037 api_server.go:131] duration metric: took 6.023517846s to wait for apiserver health ...
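
The sequence above shows the apiserver health wait: /healthz first answers 403 for the anonymous probe, then 500 while post-start hooks are still pending, and finally 200. A minimal Go sketch of such a polling loop, assuming the endpoint from the log and an arbitrary one-minute deadline (waitForHealthz is a hypothetical helper, not minikube's api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert the probe does not trust during bootstrap,
		// so this anonymous check skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending)
			// are simply retried, matching the responses seen in the log.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
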
	I1129 09:24:51.367424  322037 cni.go:84] Creating CNI manager for ""
	I1129 09:24:51.367430  322037 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:24:51.371165  322037 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:24:51.374206  322037 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:24:51.378348  322037 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:24:51.378358  322037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:24:51.392106  322037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
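
The CNI step above stages a kindnet manifest on the node and applies it with the bundled kubectl against the local kubeconfig. A hedged Go sketch of that apply step, using the paths from the log (applyCNIManifest is a hypothetical helper, and the sketch assumes it runs on the node itself rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// applyCNIManifest runs the node's kubectl to apply a staged CNI manifest,
// mirroring the `sudo kubectl apply --kubeconfig=... -f ...` line in the log.
func applyCNIManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", kubectl,
		"apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	_ = applyCNIManifest(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
}
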
	I1129 09:24:51.862545  322037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:51.874815  322037 system_pods.go:59] 8 kube-system pods found
	I1129 09:24:51.874842  322037 system_pods.go:61] "coredns-66bc5c9577-d4pzs" [69c3b8d0-5b71-487a-8850-85d5365c6c92] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:51.874850  322037 system_pods.go:61] "etcd-functional-014829" [7a23c6ba-28c4-4cb1-8e77-d74c5a59316a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:51.874854  322037 system_pods.go:61] "kindnet-56krd" [348ef737-00d4-4e56-bcc9-1c36249d98b1] Running
	I1129 09:24:51.874860  322037 system_pods.go:61] "kube-apiserver-functional-014829" [898216dd-1df7-45b8-9a9f-c000f240f7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:51.874865  322037 system_pods.go:61] "kube-controller-manager-functional-014829" [6345afa9-ef64-4c34-a070-03cabe78b29d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:51.874869  322037 system_pods.go:61] "kube-proxy-n4twp" [3d8ad1c7-61cb-44dd-ae29-3685394c136c] Running
	I1129 09:24:51.874874  322037 system_pods.go:61] "kube-scheduler-functional-014829" [40ce0533-d18f-40a6-a61a-5b544c04edb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:51.874876  322037 system_pods.go:61] "storage-provisioner" [eab93c6c-acc6-4ab9-b500-3f1e5eeaa580] Running
	I1129 09:24:51.874881  322037 system_pods.go:74] duration metric: took 12.327123ms to wait for pod list to return data ...
	I1129 09:24:51.874888  322037 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:51.879125  322037 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:51.879144  322037 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:51.879155  322037 node_conditions.go:105] duration metric: took 4.263634ms to run NodePressure ...
	I1129 09:24:51.879213  322037 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:24:52.139618  322037 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1129 09:24:52.142692  322037 kubeadm.go:744] kubelet initialised
	I1129 09:24:52.142702  322037 kubeadm.go:745] duration metric: took 3.068892ms waiting for restarted kubelet to initialise ...
	I1129 09:24:52.142717  322037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:24:52.152487  322037 ops.go:34] apiserver oom_adj: -16
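
The oom_adj probe above locates the newest kube-apiserver process and reads its /proc/<pid>/oom_adj (reported as -16 here). A rough Go equivalent, for illustration only (apiserverOOMAdj is hypothetical; reading another process's oom_adj may require elevated privileges, which the logged command handles via sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the newest kube-apiserver process with pgrep and
// returns the contents of its /proc/<pid>/oom_adj file.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %v", err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // the log above reports -16
}
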
	I1129 09:24:52.152499  322037 kubeadm.go:602] duration metric: took 9.872031129s to restartPrimaryControlPlane
	I1129 09:24:52.152521  322037 kubeadm.go:403] duration metric: took 9.944412746s to StartCluster
	I1129 09:24:52.152537  322037 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:52.152609  322037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:24:52.153232  322037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:24:52.153465  322037 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:24:52.153715  322037 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:24:52.153765  322037 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:24:52.153845  322037 addons.go:70] Setting storage-provisioner=true in profile "functional-014829"
	I1129 09:24:52.153858  322037 addons.go:239] Setting addon storage-provisioner=true in "functional-014829"
	W1129 09:24:52.153863  322037 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:24:52.153882  322037 host.go:66] Checking if "functional-014829" exists ...
	I1129 09:24:52.153921  322037 addons.go:70] Setting default-storageclass=true in profile "functional-014829"
	I1129 09:24:52.153936  322037 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-014829"
	I1129 09:24:52.154261  322037 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
	I1129 09:24:52.154537  322037 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
	I1129 09:24:52.156858  322037 out.go:179] * Verifying Kubernetes components...
	I1129 09:24:52.160076  322037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:24:52.189959  322037 addons.go:239] Setting addon default-storageclass=true in "functional-014829"
	W1129 09:24:52.189970  322037 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:24:52.189992  322037 host.go:66] Checking if "functional-014829" exists ...
	I1129 09:24:52.190430  322037 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
	I1129 09:24:52.196662  322037 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:24:52.199614  322037 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:52.199625  322037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:24:52.199693  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:52.219740  322037 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:52.219754  322037 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:24:52.219819  322037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:24:52.256799  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:24:52.266231  322037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
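
The two docker inspect calls above resolve the host port mapped to the kic container's SSH port (22/tcp), which is how the ssh clients at 127.0.0.1:33150 are built. A minimal Go sketch of that lookup, assuming the docker CLI is available on the host (sshHostPort is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks docker for the host port bound to the container's 22/tcp,
// using the same Go template as the inspect command in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-014829")
	if err != nil {
		fmt.Println(err)
		return
	}
	// The log above resolved this to 127.0.0.1:33150 for this particular run.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
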
	I1129 09:24:52.377382  322037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:24:52.388544  322037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:24:52.404557  322037 node_ready.go:35] waiting up to 6m0s for node "functional-014829" to be "Ready" ...
	I1129 09:24:52.412341  322037 node_ready.go:49] node "functional-014829" is "Ready"
	I1129 09:24:52.412358  322037 node_ready.go:38] duration metric: took 7.759313ms for node "functional-014829" to be "Ready" ...
	I1129 09:24:52.412370  322037 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:24:52.412436  322037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:24:52.418746  322037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:24:53.290404  322037 api_server.go:72] duration metric: took 1.136913317s to wait for apiserver process to appear ...
	I1129 09:24:53.290415  322037 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:24:53.290431  322037 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1129 09:24:53.307099  322037 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1129 09:24:53.308156  322037 api_server.go:141] control plane version: v1.34.1
	I1129 09:24:53.308169  322037 api_server.go:131] duration metric: took 17.749066ms to wait for apiserver health ...
	I1129 09:24:53.308177  322037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:24:53.310463  322037 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:24:53.311821  322037 system_pods.go:59] 8 kube-system pods found
	I1129 09:24:53.311837  322037 system_pods.go:61] "coredns-66bc5c9577-d4pzs" [69c3b8d0-5b71-487a-8850-85d5365c6c92] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:53.311843  322037 system_pods.go:61] "etcd-functional-014829" [7a23c6ba-28c4-4cb1-8e77-d74c5a59316a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:53.311849  322037 system_pods.go:61] "kindnet-56krd" [348ef737-00d4-4e56-bcc9-1c36249d98b1] Running
	I1129 09:24:53.311855  322037 system_pods.go:61] "kube-apiserver-functional-014829" [898216dd-1df7-45b8-9a9f-c000f240f7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:53.311860  322037 system_pods.go:61] "kube-controller-manager-functional-014829" [6345afa9-ef64-4c34-a070-03cabe78b29d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:53.311863  322037 system_pods.go:61] "kube-proxy-n4twp" [3d8ad1c7-61cb-44dd-ae29-3685394c136c] Running
	I1129 09:24:53.311869  322037 system_pods.go:61] "kube-scheduler-functional-014829" [40ce0533-d18f-40a6-a61a-5b544c04edb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:53.311871  322037 system_pods.go:61] "storage-provisioner" [eab93c6c-acc6-4ab9-b500-3f1e5eeaa580] Running
	I1129 09:24:53.311876  322037 system_pods.go:74] duration metric: took 3.69492ms to wait for pod list to return data ...
	I1129 09:24:53.311883  322037 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:24:53.313442  322037 addons.go:530] duration metric: took 1.159675884s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:24:53.314527  322037 default_sa.go:45] found service account: "default"
	I1129 09:24:53.314537  322037 default_sa.go:55] duration metric: took 2.649375ms for default service account to be created ...
	I1129 09:24:53.314544  322037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:24:53.317162  322037 system_pods.go:86] 8 kube-system pods found
	I1129 09:24:53.317179  322037 system_pods.go:89] "coredns-66bc5c9577-d4pzs" [69c3b8d0-5b71-487a-8850-85d5365c6c92] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:24:53.317187  322037 system_pods.go:89] "etcd-functional-014829" [7a23c6ba-28c4-4cb1-8e77-d74c5a59316a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:24:53.317191  322037 system_pods.go:89] "kindnet-56krd" [348ef737-00d4-4e56-bcc9-1c36249d98b1] Running
	I1129 09:24:53.317197  322037 system_pods.go:89] "kube-apiserver-functional-014829" [898216dd-1df7-45b8-9a9f-c000f240f7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:24:53.317202  322037 system_pods.go:89] "kube-controller-manager-functional-014829" [6345afa9-ef64-4c34-a070-03cabe78b29d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:24:53.317208  322037 system_pods.go:89] "kube-proxy-n4twp" [3d8ad1c7-61cb-44dd-ae29-3685394c136c] Running
	I1129 09:24:53.317213  322037 system_pods.go:89] "kube-scheduler-functional-014829" [40ce0533-d18f-40a6-a61a-5b544c04edb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:24:53.317215  322037 system_pods.go:89] "storage-provisioner" [eab93c6c-acc6-4ab9-b500-3f1e5eeaa580] Running
	I1129 09:24:53.317220  322037 system_pods.go:126] duration metric: took 2.672825ms to wait for k8s-apps to be running ...
	I1129 09:24:53.317227  322037 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:24:53.317285  322037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:24:53.330665  322037 system_svc.go:56] duration metric: took 13.427881ms WaitForService to wait for kubelet
	I1129 09:24:53.330683  322037 kubeadm.go:587] duration metric: took 1.177198215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:24:53.330711  322037 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:24:53.333872  322037 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 09:24:53.333888  322037 node_conditions.go:123] node cpu capacity is 2
	I1129 09:24:53.333898  322037 node_conditions.go:105] duration metric: took 3.183544ms to run NodePressure ...
	I1129 09:24:53.333911  322037 start.go:242] waiting for startup goroutines ...
	I1129 09:24:53.333917  322037 start.go:247] waiting for cluster config update ...
	I1129 09:24:53.333926  322037 start.go:256] writing updated cluster config ...
	I1129 09:24:53.334240  322037 ssh_runner.go:195] Run: rm -f paused
	I1129 09:24:53.338300  322037 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:24:53.341527  322037 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d4pzs" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:24:55.347267  322037 pod_ready.go:104] pod "coredns-66bc5c9577-d4pzs" is not "Ready", error: <nil>
	I1129 09:24:57.353494  322037 pod_ready.go:94] pod "coredns-66bc5c9577-d4pzs" is "Ready"
	I1129 09:24:57.353509  322037 pod_ready.go:86] duration metric: took 4.011968522s for pod "coredns-66bc5c9577-d4pzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:57.361359  322037 pod_ready.go:83] waiting for pod "etcd-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:57.368148  322037 pod_ready.go:94] pod "etcd-functional-014829" is "Ready"
	I1129 09:24:57.368164  322037 pod_ready.go:86] duration metric: took 6.789749ms for pod "etcd-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:57.460095  322037 pod_ready.go:83] waiting for pod "kube-apiserver-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:57.465017  322037 pod_ready.go:94] pod "kube-apiserver-functional-014829" is "Ready"
	I1129 09:24:57.465031  322037 pod_ready.go:86] duration metric: took 4.92267ms for pod "kube-apiserver-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:24:57.467303  322037 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:24:59.473113  322037 pod_ready.go:104] pod "kube-controller-manager-functional-014829" is not "Ready", error: <nil>
	I1129 09:25:01.973140  322037 pod_ready.go:94] pod "kube-controller-manager-functional-014829" is "Ready"
	I1129 09:25:01.973156  322037 pod_ready.go:86] duration metric: took 4.505840635s for pod "kube-controller-manager-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:25:01.976006  322037 pod_ready.go:83] waiting for pod "kube-proxy-n4twp" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:25:01.981176  322037 pod_ready.go:94] pod "kube-proxy-n4twp" is "Ready"
	I1129 09:25:01.981190  322037 pod_ready.go:86] duration metric: took 5.171198ms for pod "kube-proxy-n4twp" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:25:01.983550  322037 pod_ready.go:83] waiting for pod "kube-scheduler-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:25:02.345445  322037 pod_ready.go:94] pod "kube-scheduler-functional-014829" is "Ready"
	I1129 09:25:02.345460  322037 pod_ready.go:86] duration metric: took 361.896969ms for pod "kube-scheduler-functional-014829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:25:02.345470  322037 pod_ready.go:40] duration metric: took 9.007148873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:25:02.398309  322037 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 09:25:02.403422  322037 out.go:179] * Done! kubectl is now configured to use "functional-014829" cluster and "default" namespace by default
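
The pod_ready waits near the end of the output poll each control-plane component (plus kube-dns and kube-proxy) until its pod reports Ready. Outside of minikube, a comparable check could be approximated with `kubectl wait`, as in this hedged sketch; the selectors mirror the labels listed in the log, and this is not how minikube itself implements the wait:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One selector per component listed in the pod_ready log lines above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("pods with %q not ready: %v\n%s\n", sel, err, out)
		}
	}
}
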
	
	
	==> CRI-O <==
	Nov 29 09:25:37 functional-014829 crio[3532]: time="2025-11-29T09:25:37.366692364Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-h44lv Namespace:default ID:10fe2e2d7609f87b96b46867f47d9a998a7a2380f8b82a4721978d04516d426e UID:76d01e64-a4ec-4d8e-9a9c-7bb91202df49 NetNS:/var/run/netns/a2fb3809-706c-4e27-a89b-cdcf1746ccc4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004a4ce8}] Aliases:map[]}"
	Nov 29 09:25:37 functional-014829 crio[3532]: time="2025-11-29T09:25:37.3668657Z" level=info msg="Checking pod default_hello-node-75c85bcc94-h44lv for CNI network kindnet (type=ptp)"
	Nov 29 09:25:37 functional-014829 crio[3532]: time="2025-11-29T09:25:37.370656974Z" level=info msg="Ran pod sandbox 10fe2e2d7609f87b96b46867f47d9a998a7a2380f8b82a4721978d04516d426e with infra container: default/hello-node-75c85bcc94-h44lv/POD" id=e74855e2-e04b-480f-83ab-e308fc7707fe name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:25:37 functional-014829 crio[3532]: time="2025-11-29T09:25:37.372199305Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=aae6124f-6cc9-4aeb-8118-6f717f09cbb0 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.313367417Z" level=info msg="Stopping pod sandbox: 26509779bfca58343073fc92d3ee21f0892aee0e2b62f5bd866cf69b3fb33581" id=42a9f789-cac2-434e-9f64-4d3c7614c4b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.313444628Z" level=info msg="Stopped pod sandbox (already stopped): 26509779bfca58343073fc92d3ee21f0892aee0e2b62f5bd866cf69b3fb33581" id=42a9f789-cac2-434e-9f64-4d3c7614c4b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.314215191Z" level=info msg="Removing pod sandbox: 26509779bfca58343073fc92d3ee21f0892aee0e2b62f5bd866cf69b3fb33581" id=cc8af5f2-1fce-4497-94a7-a9c2e3c7673b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.317800956Z" level=info msg="Removed pod sandbox: 26509779bfca58343073fc92d3ee21f0892aee0e2b62f5bd866cf69b3fb33581" id=cc8af5f2-1fce-4497-94a7-a9c2e3c7673b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.318482853Z" level=info msg="Stopping pod sandbox: 97ddbf1fa54ff771ff8dcd50f8f6d3dca6dd4680c39e58099019b7a80ffb172a" id=e93cbdae-8681-4dd1-9b63-913bb506abf5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.318529902Z" level=info msg="Stopped pod sandbox (already stopped): 97ddbf1fa54ff771ff8dcd50f8f6d3dca6dd4680c39e58099019b7a80ffb172a" id=e93cbdae-8681-4dd1-9b63-913bb506abf5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.318869255Z" level=info msg="Removing pod sandbox: 97ddbf1fa54ff771ff8dcd50f8f6d3dca6dd4680c39e58099019b7a80ffb172a" id=6828accc-73a9-4658-aaad-2c76c28d5314 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.322098313Z" level=info msg="Removed pod sandbox: 97ddbf1fa54ff771ff8dcd50f8f6d3dca6dd4680c39e58099019b7a80ffb172a" id=6828accc-73a9-4658-aaad-2c76c28d5314 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.322536933Z" level=info msg="Stopping pod sandbox: 22b653f5820da55f63c11b1157a977fd29f2d0e8ab4ace104a7d323107ebd470" id=a735bae7-5771-40a9-94d3-680d93ee79da name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.322662851Z" level=info msg="Stopped pod sandbox (already stopped): 22b653f5820da55f63c11b1157a977fd29f2d0e8ab4ace104a7d323107ebd470" id=a735bae7-5771-40a9-94d3-680d93ee79da name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.322997132Z" level=info msg="Removing pod sandbox: 22b653f5820da55f63c11b1157a977fd29f2d0e8ab4ace104a7d323107ebd470" id=3b0d0faa-0bce-467c-9439-3bba00fd49ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:44 functional-014829 crio[3532]: time="2025-11-29T09:25:44.326620272Z" level=info msg="Removed pod sandbox: 22b653f5820da55f63c11b1157a977fd29f2d0e8ab4ace104a7d323107ebd470" id=3b0d0faa-0bce-467c-9439-3bba00fd49ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 09:25:51 functional-014829 crio[3532]: time="2025-11-29T09:25:51.339459904Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=876e9900-cb7b-40cf-b2ea-eefbb9cc580c name=/runtime.v1.ImageService/PullImage
	Nov 29 09:25:56 functional-014829 crio[3532]: time="2025-11-29T09:25:56.34116787Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1fbda6d1-affc-435c-a579-fae3b0d84cb2 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:26:17 functional-014829 crio[3532]: time="2025-11-29T09:26:17.338861315Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d3709856-a9c9-4938-bf5c-643d62e77653 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:26:45 functional-014829 crio[3532]: time="2025-11-29T09:26:45.338976544Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c21c700d-a50b-41b7-abef-abb1a237423b name=/runtime.v1.ImageService/PullImage
	Nov 29 09:27:09 functional-014829 crio[3532]: time="2025-11-29T09:27:09.339497514Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4d4a58c7-2fb9-44c7-afe1-4dc0aea8da76 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:28:15 functional-014829 crio[3532]: time="2025-11-29T09:28:15.339256512Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e1558d05-e007-4457-8b97-c125842a130b name=/runtime.v1.ImageService/PullImage
	Nov 29 09:28:33 functional-014829 crio[3532]: time="2025-11-29T09:28:33.339023378Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=92978eb1-5b43-47d6-8cc9-fb4ef38353f9 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:31:05 functional-014829 crio[3532]: time="2025-11-29T09:31:05.338961457Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=970cc6e3-497d-4d01-86fc-dcdf1261ef95 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:31:15 functional-014829 crio[3532]: time="2025-11-29T09:31:15.339039187Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5d054254-925f-4161-a9ed-30374c55bda0 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	53df8e84113a6       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   8d8f7d36c7030       sp-pod                                      default
	7c2c0f7ff15a4       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   7d9248f0bda8c       nginx-svc                                   default
	d223651132e73       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   1e1df74f6d128       kube-proxy-n4twp                            kube-system
	ccae1d5fc3649       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   cc55e67d7306c       coredns-66bc5c9577-d4pzs                    kube-system
	3121da00e9f90       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   b4df25a927ea2       storage-provisioner                         kube-system
	dedf367124068       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   9c4f9cf7b37e5       kindnet-56krd                               kube-system
	4b89d3164194b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   7aca1aa1ba6dc       kube-apiserver-functional-014829            kube-system
	7bf3cd6ab5430       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   ee36c362d3988       kube-controller-manager-functional-014829   kube-system
	4b2f421c3955e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   60020adfc49a6       kube-scheduler-functional-014829            kube-system
	d6098686bf0fd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   58b60ce404c69       etcd-functional-014829                      kube-system
	03a4e729e9c3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   cc55e67d7306c       coredns-66bc5c9577-d4pzs                    kube-system
	ce9301f2b209f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   b4df25a927ea2       storage-provisioner                         kube-system
	37cd1c2d49a71       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   9c4f9cf7b37e5       kindnet-56krd                               kube-system
	2d1065d29e993       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   60020adfc49a6       kube-scheduler-functional-014829            kube-system
	877a4960942a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   1e1df74f6d128       kube-proxy-n4twp                            kube-system
	529736bb6e8b8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   ee36c362d3988       kube-controller-manager-functional-014829   kube-system
	eff0cc2d532cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   58b60ce404c69       etcd-functional-014829                      kube-system
	
	
	==> coredns [03a4e729e9c3e4e86b98aa6941291ada24e6cd8c1dead65536bc1086e7e52090] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51479 - 52800 "HINFO IN 3056831178689493987.2803819359439153401. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031035831s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ccae1d5fc36495c70379552ec50a73af8f6518e3599eb3d88cae62e0cba6ae1a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59910 - 41296 "HINFO IN 4653980532398292490.357782987773921169. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004289135s
	
	
	==> describe nodes <==
	Name:               functional-014829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-014829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=functional-014829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_23_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-014829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:34:52 +0000   Sat, 29 Nov 2025 09:23:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:34:52 +0000   Sat, 29 Nov 2025 09:23:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:34:52 +0000   Sat, 29 Nov 2025 09:23:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:34:52 +0000   Sat, 29 Nov 2025 09:23:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-014829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                8ef5922d-3e0d-4691-9180-89f3c239202b
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-h44lv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-sjv8d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-d4pzs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-014829                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-56krd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-014829             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-014829    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-n4twp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-014829             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-014829 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-014829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-014829 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-014829 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-014829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-014829 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-014829 event: Registered Node functional-014829 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-014829 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-014829 event: Registered Node functional-014829 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-014829 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-014829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-014829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-014829 event: Registered Node functional-014829 in Controller
	
	
	==> dmesg <==
	[Nov29 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015149] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507546] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034739] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.833095] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +4.564053] kauditd_printk_skb: 35 callbacks suppressed
	[Nov29 08:31] hrtimer: interrupt took 8840027 ns
	[Nov29 09:14] kauditd_printk_skb: 8 callbacks suppressed
	[Nov29 09:16] overlayfs: idmapped layers are currently not supported
	[  +0.067811] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov29 09:22] overlayfs: idmapped layers are currently not supported
	[Nov29 09:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d6098686bf0fd55903474b155722f9ba9d0c014a78239f657b9cf8ab9a01fb3f] <==
	{"level":"warn","ts":"2025-11-29T09:24:48.378827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.398254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.410966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.449356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.451211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.472272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.511202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.522333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.539310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.556149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.573405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.594985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.610915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.623282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.652742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.687275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.713277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.736701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.762475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.782799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.798709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:48.855948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:34:47.263748Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1124}
	{"level":"info","ts":"2025-11-29T09:34:47.287435Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1124,"took":"23.305571ms","hash":3920385986,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1470464,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-29T09:34:47.287490Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3920385986,"revision":1124,"compact-revision":-1}
	
	
	==> etcd [eff0cc2d532cbb2fa06f918890a448b707058dead066d71d63a0e098aa0ff37e] <==
	{"level":"warn","ts":"2025-11-29T09:24:12.962819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:12.979064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:12.999011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:13.030213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:13.047094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:13.067101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:24:13.177727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58436","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:24:33.950490Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T09:24:33.950541Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-014829","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-29T09:24:33.950645Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:24:34.100714Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:24:34.102182Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:24:34.102260Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-29T09:24:34.102355Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-29T09:24:34.102375Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-29T09:24:34.102630Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:24:34.102662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:24:34.102719Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:24:34.102736Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T09:24:34.102744Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-29T09:24:34.102670Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:24:34.106343Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-29T09:24:34.106432Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:24:34.106462Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-29T09:24:34.106470Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-014829","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:35:23 up  2:17,  0 user,  load average: 0.81, 0.57, 1.56
	Linux functional-014829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37cd1c2d49a7138aededfa39a7418964ff8f3f1734fa1b891b53304fd1da0efe] <==
	I1129 09:24:10.535247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:24:10.542539       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1129 09:24:10.542761       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:24:10.546259       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:24:10.546336       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:24:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:24:10.836960       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:24:10.836990       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:24:10.837000       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:24:10.858826       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:24:14.437980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:24:14.438013       1 metrics.go:72] Registering metrics
	I1129 09:24:14.438087       1 controller.go:711] "Syncing nftables rules"
	I1129 09:24:20.813196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:24:20.813342       1 main.go:301] handling current node
	I1129 09:24:30.815843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:24:30.815876       1 main.go:301] handling current node
	
	
	==> kindnet [dedf367124068f6c61de6aaed28d16636c003ec95312cb7fff2908e051e8679c] <==
	I1129 09:33:21.020341       1 main.go:301] handling current node
	I1129 09:33:31.017363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:33:31.017427       1 main.go:301] handling current node
	I1129 09:33:41.024489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:33:41.024528       1 main.go:301] handling current node
	I1129 09:33:51.021854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:33:51.021967       1 main.go:301] handling current node
	I1129 09:34:01.023777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:01.023810       1 main.go:301] handling current node
	I1129 09:34:11.024847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:11.024969       1 main.go:301] handling current node
	I1129 09:34:21.020017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:21.020132       1 main.go:301] handling current node
	I1129 09:34:31.017076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:31.017112       1 main.go:301] handling current node
	I1129 09:34:41.025385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:41.025482       1 main.go:301] handling current node
	I1129 09:34:51.023215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:34:51.023337       1 main.go:301] handling current node
	I1129 09:35:01.023067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:35:01.023106       1 main.go:301] handling current node
	I1129 09:35:11.025638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:35:11.025671       1 main.go:301] handling current node
	I1129 09:35:21.020909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 09:35:21.020946       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b89d3164194bb9727e864467af897ab6445dd61be69fa614adf850c5f39073f] <==
	I1129 09:24:49.958170       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:24:50.001453       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:24:50.001557       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:24:50.001733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:24:50.002137       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:24:50.005809       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:24:50.011970       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:24:50.017130       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:24:50.386634       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:24:50.621559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:24:51.853320       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:24:51.997888       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:24:52.072232       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:24:52.080050       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:24:53.262678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:24:53.301012       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:24:53.594854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:25:05.730389       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.40.252"}
	I1129 09:25:11.867762       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.8.207"}
	I1129 09:25:21.586690       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.80.248"}
	E1129 09:25:28.641228       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47834: use of closed network connection
	E1129 09:25:29.604171       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1129 09:25:36.911605       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45872: use of closed network connection
	I1129 09:25:37.127010       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.74.53"}
	I1129 09:34:49.885651       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [529736bb6e8b851a16bad1aed295fe21f77cd931176805dc28eb3cf63538d52b] <==
	I1129 09:24:17.628473       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:24:17.628496       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:24:17.628509       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:24:17.628521       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:24:17.628459       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:24:17.637592       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:24:17.628830       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:24:17.637061       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:24:17.637084       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:24:17.637103       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:24:17.637129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:24:17.644150       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:24:17.644270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:24:17.644350       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:24:17.644486       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:24:17.649009       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:24:17.650509       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:24:17.657135       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:24:17.668516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:24:17.674618       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:24:17.678517       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:24:17.679283       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:24:17.679411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:24:17.679451       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:24:17.679482       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [7bf3cd6ab5430ef1d0659e68b36390e081f7037848302b438a9a575d4e291336] <==
	I1129 09:24:53.208863       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:24:53.211355       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:24:53.214119       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:24:53.217409       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:24:53.224239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:24:53.225372       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:24:53.232682       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:24:53.236720       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:24:53.236870       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:24:53.240705       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:24:53.240775       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:24:53.243532       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:24:53.243824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:24:53.243842       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:24:53.243848       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:24:53.244622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:24:53.247084       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:24:53.249881       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:24:53.252281       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:24:53.256952       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:24:53.261497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:24:53.263121       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:24:53.263154       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:24:53.263690       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:24:53.269892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [877a4960942a0b02d1c53c5d5daa693193b2f21386d72be8cb459f25084e7cbe] <==
	I1129 09:24:12.028591       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:24:13.623121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:24:14.447284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:24:14.447432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 09:24:14.447511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:24:14.712364       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:24:14.712487       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:24:14.782780       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:24:14.783148       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:24:14.783339       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:24:14.784563       1 config.go:200] "Starting service config controller"
	I1129 09:24:14.784629       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:24:14.784670       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:24:14.784709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:24:14.784752       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:24:14.784781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:24:14.785475       1 config.go:309] "Starting node config controller"
	I1129 09:24:14.785536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:24:14.785566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:24:14.888993       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:24:14.889059       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:24:14.889093       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d223651132e739d49b78b967ec53c6038bbbea02a8c81e2f1afd79fff72f9bc5] <==
	I1129 09:24:50.805084       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:24:50.931029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:24:51.034172       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:24:51.034219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 09:24:51.034358       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:24:51.059436       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:24:51.059487       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:24:51.063656       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:24:51.063963       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:24:51.063986       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:24:51.065860       1 config.go:200] "Starting service config controller"
	I1129 09:24:51.065882       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:24:51.065901       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:24:51.065905       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:24:51.065917       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:24:51.065921       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:24:51.066696       1 config.go:309] "Starting node config controller"
	I1129 09:24:51.066716       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:24:51.066723       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:24:51.166306       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:24:51.166355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:24:51.166315       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2d1065d29e9930f5eddcd282046520c04d8e1ccd4ff0fe1a75b7d1d62d156938] <==
	I1129 09:24:13.495941       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:24:14.884902       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:24:14.885047       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:24:14.896433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:24:14.896522       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:24:14.896558       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:24:14.896585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:24:14.897980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:14.898002       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:14.898040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:14.898068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:15.002737       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 09:24:15.002959       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:15.003302       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:33.954176       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 09:24:33.954201       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 09:24:33.954223       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1129 09:24:33.954263       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:33.954284       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:33.954300       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1129 09:24:33.954642       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 09:24:33.954673       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4b2f421c3955e7092db48da4f7a106771cd459ee78c33a8ceaca44e070e4178f] <==
	I1129 09:24:46.521025       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:24:50.477057       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:24:50.477773       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:24:50.491152       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:24:50.491313       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:24:50.491372       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:24:50.491439       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:24:50.492495       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:50.492569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:50.492614       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:50.492674       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:24:50.591692       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 09:24:50.595410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:24:50.599246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:32:35 functional-014829 kubelet[3860]: E1129 09:32:35.338581    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:32:46 functional-014829 kubelet[3860]: E1129 09:32:46.339634    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:32:47 functional-014829 kubelet[3860]: E1129 09:32:47.338665    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:32:58 functional-014829 kubelet[3860]: E1129 09:32:58.339729    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:33:01 functional-014829 kubelet[3860]: E1129 09:33:01.338937    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:33:13 functional-014829 kubelet[3860]: E1129 09:33:13.338671    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:33:16 functional-014829 kubelet[3860]: E1129 09:33:16.339015    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:33:24 functional-014829 kubelet[3860]: E1129 09:33:24.339044    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:33:29 functional-014829 kubelet[3860]: E1129 09:33:29.338820    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:33:39 functional-014829 kubelet[3860]: E1129 09:33:39.339243    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:33:43 functional-014829 kubelet[3860]: E1129 09:33:43.338556    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:33:52 functional-014829 kubelet[3860]: E1129 09:33:52.339665    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:33:57 functional-014829 kubelet[3860]: E1129 09:33:57.339265    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:34:05 functional-014829 kubelet[3860]: E1129 09:34:05.339256    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:34:11 functional-014829 kubelet[3860]: E1129 09:34:11.338970    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:34:19 functional-014829 kubelet[3860]: E1129 09:34:19.339075    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:34:25 functional-014829 kubelet[3860]: E1129 09:34:25.338812    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:34:31 functional-014829 kubelet[3860]: E1129 09:34:31.338314    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:34:36 functional-014829 kubelet[3860]: E1129 09:34:36.339788    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:34:43 functional-014829 kubelet[3860]: E1129 09:34:43.339172    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:34:47 functional-014829 kubelet[3860]: E1129 09:34:47.338515    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:34:56 functional-014829 kubelet[3860]: E1129 09:34:56.339515    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:35:01 functional-014829 kubelet[3860]: E1129 09:35:01.338420    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	Nov 29 09:35:11 functional-014829 kubelet[3860]: E1129 09:35:11.339311    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sjv8d" podUID="766b0629-9f33-42de-95ae-98ef95269a7f"
	Nov 29 09:35:12 functional-014829 kubelet[3860]: E1129 09:35:12.339271    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h44lv" podUID="76d01e64-a4ec-4d8e-9a9c-7bb91202df49"
	
	
	==> storage-provisioner [3121da00e9f901678c6f5a7bb5b3eecce96297b291a194d8f094c168888380ba] <==
	W1129 09:34:58.824556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:00.827853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:00.832282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:02.835115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:02.841183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:04.845098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:04.851974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:06.854908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:06.859348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:08.862869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:08.867358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:10.870689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:10.878196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:12.881680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:12.888445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:14.891573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:14.896230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:16.899566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:16.904187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:18.907166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:18.913831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:20.916883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:20.921423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:22.925120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:35:22.930040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ce9301f2b209f99a8739dab2f290a480232e4a2a065c5a4142e221271d0e6ac1] <==
	I1129 09:24:11.787269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:24:14.349455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:24:14.349503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:24:14.480917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:17.936620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:22.197115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:25.795605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:28.849516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:31.872439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:31.880368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:31.880594       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:24:31.880814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-014829_41ae8899-5f6e-4f12-8dd6-b3d74a6deca8!
	I1129 09:24:31.881778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0fd3478d-aae7-4a15-984c-897c842851a5", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-014829_41ae8899-5f6e-4f12-8dd6-b3d74a6deca8 became leader
	W1129 09:24:31.893061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:31.897476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:24:31.981047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-014829_41ae8899-5f6e-4f12-8dd6-b3d74a6deca8!
	W1129 09:24:33.900835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:24:33.907023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-014829 -n functional-014829
helpers_test.go:269: (dbg) Run:  kubectl --context functional-014829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-h44lv hello-node-connect-7d85dfc575-sjv8d
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-014829 describe pod hello-node-75c85bcc94-h44lv hello-node-connect-7d85dfc575-sjv8d
helpers_test.go:290: (dbg) kubectl --context functional-014829 describe pod hello-node-75c85bcc94-h44lv hello-node-connect-7d85dfc575-sjv8d:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-h44lv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014829/192.168.49.2
	Start Time:       Sat, 29 Nov 2025 09:25:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qppqf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qppqf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-h44lv to functional-014829
	  Normal   Pulling    6m51s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m51s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m51s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m39s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m39s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-sjv8d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-014829/192.168.49.2
	Start Time:       Sat, 29 Nov 2025 09:25:21 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qsp4h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qsp4h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sjv8d to functional-014829
	  Normal   Pulling    7m9s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.60s)
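Note: the kubelet events above give the root cause for this failure and the hello-node failures that follow: the test deploys the unqualified image name "kicbase/echo-server", and CRI-O's short-name resolution is in enforcing mode, so the ambiguous short name is rejected instead of being resolved against a registry. A minimal check, assuming the standard containers-registries config paths inside the kicbase node and an illustrative fully qualified tag:

	# Show the short-name policy and any configured aliases inside the node
	# (paths are the usual containers-registries locations, assumed here).
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo grep -R -n 'short-name' /etc/containers/registries.conf /etc/containers/registries.conf.d/ 2>/dev/null"
	# A fully qualified reference bypasses short-name resolution entirely
	# (registry and tag chosen for illustration only).
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo crictl pull docker.io/kicbase/echo-server:latest"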

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-014829 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-014829 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-h44lv" [76d01e64-a4ec-4d8e-9a9c-7bb91202df49] Pending
helpers_test.go:352: "hello-node-75c85bcc94-h44lv" [76d01e64-a4ec-4d8e-9a9c-7bb91202df49] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1129 09:26:07.874496  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:28:24.013947  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:28:51.716397  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:33:24.013186  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-014829 -n functional-014829
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-29 09:35:37.606234315 +0000 UTC m=+1223.262526724
functional_test.go:1460: (dbg) Run:  kubectl --context functional-014829 describe po hello-node-75c85bcc94-h44lv -n default
functional_test.go:1460: (dbg) kubectl --context functional-014829 describe po hello-node-75c85bcc94-h44lv -n default:
Name:             hello-node-75c85bcc94-h44lv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-014829/192.168.49.2
Start Time:       Sat, 29 Nov 2025 09:25:37 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qppqf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qppqf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-h44lv to functional-014829
  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-014829 logs hello-node-75c85bcc94-h44lv -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-014829 logs hello-node-75c85bcc94-h44lv -n default: exit status 1 (124.095885ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-h44lv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-014829 logs hello-node-75c85bcc94-h44lv -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)
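Note: this is the same ambiguous short-name rejection seen in ServiceCmdConnect; the deployment never gets a running pod because the pull is refused. A hedged variant of the same deployment using a fully qualified image reference (registry and tag are illustrative assumptions, not part of the test):

	# Re-create the deployment with a fully qualified image so the pull never
	# depends on short-name aliases, then wait for it to become available.
	kubectl --context functional-014829 create deployment hello-node --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-014829 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-014829 rollout status deployment/hello-node --timeout=120s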

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 service --namespace=default --https --url hello-node: exit status 115 (501.198387ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30582
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-014829 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)
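Note: the SVC_UNREACHABLE exit is a knock-on effect of the hello-node pods never becoming Ready, so the NodePort URL is allocated but has no backing endpoints. A short check of that precondition, using the EndpointSlice API flagged by the deprecation warning earlier in this report:

	# Confirm the service has no ready endpoints (which is what trips SVC_UNREACHABLE),
	# rather than a problem with the URL itself.
	kubectl --context functional-014829 get endpointslices -l kubernetes.io/service-name=hello-node
	kubectl --context functional-014829 get pods -l app=hello-node -o wide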

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 service hello-node --url --format={{.IP}}: exit status 115 (606.073585ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-014829 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 service hello-node --url: exit status 115 (467.773286ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30582
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-014829 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30582
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image load --daemon kicbase/echo-server:functional-014829 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 image load --daemon kicbase/echo-server:functional-014829 --alsologtostderr: (2.781928433s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-014829" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.06s)
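Note: `image load --daemon` exits successfully here, yet the tag never appears in `image ls`. A minimal cross-check against the runtime inside the node, independent of minikube's own listing (running crictl on the kicbase node is an assumption):

	# Ask CRI-O directly which images it holds, then compare with minikube's view.
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo crictl images | grep echo-server"
	out/minikube-linux-arm64 -p functional-014829 image ls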

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image load --daemon kicbase/echo-server:functional-014829 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-014829" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-014829
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image load --daemon kicbase/echo-server:functional-014829 --alsologtostderr
2025/11/29 09:35:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-014829" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image save kicbase/echo-server:functional-014829 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1129 09:35:52.449340  330057 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:35:52.449508  330057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:52.449514  330057 out.go:374] Setting ErrFile to fd 2...
	I1129 09:35:52.449518  330057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:52.449767  330057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:35:52.450505  330057 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:35:52.450618  330057 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:35:52.451609  330057 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
	I1129 09:35:52.479011  330057 ssh_runner.go:195] Run: systemctl --version
	I1129 09:35:52.479199  330057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
	I1129 09:35:52.500798  330057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
	I1129 09:35:52.620819  330057 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1129 09:35:52.620886  330057 cache_images.go:255] Failed to load cached images for "functional-014829": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1129 09:35:52.620905  330057 cache_images.go:267] failed pushing to: functional-014829

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)
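Note: this failure is a direct knock-on of ImageSaveToFile above: the save step produced no archive, so the load step's stat fails. The round trip the two tests exercise, with an explicit existence check between the steps (commands and paths reused from the log above), would look like:

	# Save the image out of the cluster, verify the archive is non-empty, then load it back;
	# the archive check is the step this run skipped past.
	out/minikube-linux-arm64 -p functional-014829 image save kicbase/echo-server:functional-014829 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
	test -s /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar || echo "image save produced no archive"
	out/minikube-linux-arm64 -p functional-014829 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr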

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-014829
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image save --daemon kicbase/echo-server:functional-014829 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-014829
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-014829: exit status 1 (19.768233ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-014829

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-014829

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
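Note: ImageSaveDaemon expects the tag to reappear in the host Docker daemon after `image save --daemon`; since the image was never present in the cluster runtime (see the load failures above), there is nothing to export. A hedged sketch of the expected round trip, reusing the commands from this test:

	# Export the image from the cluster runtime back into the host Docker daemon,
	# then confirm the tag is visible to docker.
	out/minikube-linux-arm64 -p functional-014829 image save --daemon kicbase/echo-server:functional-014829 --alsologtostderr
	docker image ls | grep echo-server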

                                                
                                    
TestJSONOutput/pause/Command (1.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-434377 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-434377 --output=json --user=testUser: exit status 80 (1.642184758s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"01dfb0d2-12e5-4e3a-be5e-93de488c8d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-434377 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"720e0d09-aa63-4814-8e98-1dfab4fab106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-29T09:48:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"db34e19b-192c-46ed-810b-3a0282ab0245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-434377 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.65s)
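Note: the GUEST_PAUSE error comes from minikube shelling out to `sudo runc list -f json`, which uses runc's default /run/runc state root; that directory does not exist on this node, so every list attempt fails (the unpause and TestPause/serial/Pause failures below show the same signature). A minimal check, assuming the standard crio config locations in the kicbase image:

	# See whether the default runc state root (or a crun one) exists, and which
	# low-level runtime and runtime_root crio is configured with.
	out/minikube-linux-arm64 -p json-output-434377 ssh "sudo ls -d /run/runc /run/crun 2>/dev/null"
	out/minikube-linux-arm64 -p json-output-434377 ssh "sudo grep -R -n -E 'default_runtime|runtime_root' /etc/crio/ 2>/dev/null"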

                                                
                                    
TestJSONOutput/unpause/Command (1.94s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-434377 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-434377 --output=json --user=testUser: exit status 80 (1.937123856s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"16b0731d-8bba-4a83-99a7-c9e5d7f92ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-434377 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7659b1c1-fe8f-4daa-946f-cfe98d6a2d7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-29T09:48:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"bceca113-26b8-4103-b9f7-622fc1d536d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-434377 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.94s)

                                                
                                    
TestPause/serial/Pause (6.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-377932 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-377932 --alsologtostderr -v=5: exit status 80 (1.920041968s)

                                                
                                                
-- stdout --
	* Pausing node pause-377932 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:14:37.428053  473516 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:14:37.428931  473516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:37.428966  473516 out.go:374] Setting ErrFile to fd 2...
	I1129 10:14:37.428986  473516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:37.429285  473516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:14:37.429654  473516 out.go:368] Setting JSON to false
	I1129 10:14:37.429702  473516 mustload.go:66] Loading cluster: pause-377932
	I1129 10:14:37.430248  473516 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:37.430780  473516 cli_runner.go:164] Run: docker container inspect pause-377932 --format={{.State.Status}}
	I1129 10:14:37.452634  473516 host.go:66] Checking if "pause-377932" exists ...
	I1129 10:14:37.452962  473516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:14:37.550678  473516 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:14:37.537007952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:14:37.551335  473516 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-377932 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:14:37.554358  473516 out.go:179] * Pausing node pause-377932 ... 
	I1129 10:14:37.558156  473516 host.go:66] Checking if "pause-377932" exists ...
	I1129 10:14:37.558564  473516 ssh_runner.go:195] Run: systemctl --version
	I1129 10:14:37.558609  473516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:37.582191  473516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:37.697540  473516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:14:37.711342  473516 pause.go:52] kubelet running: true
	I1129 10:14:37.711411  473516 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:14:37.997461  473516 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:14:37.997557  473516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:14:38.071196  473516 cri.go:89] found id: "90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260"
	I1129 10:14:38.071221  473516 cri.go:89] found id: "fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad"
	I1129 10:14:38.071226  473516 cri.go:89] found id: "dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52"
	I1129 10:14:38.071230  473516 cri.go:89] found id: "d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5"
	I1129 10:14:38.071234  473516 cri.go:89] found id: "de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f"
	I1129 10:14:38.071247  473516 cri.go:89] found id: "8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7"
	I1129 10:14:38.071251  473516 cri.go:89] found id: "78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82"
	I1129 10:14:38.071254  473516 cri.go:89] found id: "863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2"
	I1129 10:14:38.071257  473516 cri.go:89] found id: "22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893"
	I1129 10:14:38.071263  473516 cri.go:89] found id: "212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de"
	I1129 10:14:38.071267  473516 cri.go:89] found id: "7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a"
	I1129 10:14:38.071271  473516 cri.go:89] found id: "d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	I1129 10:14:38.071274  473516 cri.go:89] found id: "b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7"
	I1129 10:14:38.071276  473516 cri.go:89] found id: "7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	I1129 10:14:38.071280  473516 cri.go:89] found id: ""
	I1129 10:14:38.071369  473516 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:14:38.083497  473516 retry.go:31] will retry after 186.751891ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:38Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:14:38.270923  473516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:14:38.284371  473516 pause.go:52] kubelet running: false
	I1129 10:14:38.284444  473516 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:14:38.426920  473516 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:14:38.427035  473516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:14:38.495811  473516 cri.go:89] found id: "90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260"
	I1129 10:14:38.495832  473516 cri.go:89] found id: "fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad"
	I1129 10:14:38.495838  473516 cri.go:89] found id: "dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52"
	I1129 10:14:38.495842  473516 cri.go:89] found id: "d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5"
	I1129 10:14:38.495845  473516 cri.go:89] found id: "de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f"
	I1129 10:14:38.495849  473516 cri.go:89] found id: "8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7"
	I1129 10:14:38.495853  473516 cri.go:89] found id: "78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82"
	I1129 10:14:38.495861  473516 cri.go:89] found id: "863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2"
	I1129 10:14:38.495865  473516 cri.go:89] found id: "22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893"
	I1129 10:14:38.495872  473516 cri.go:89] found id: "212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de"
	I1129 10:14:38.495879  473516 cri.go:89] found id: "7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a"
	I1129 10:14:38.495882  473516 cri.go:89] found id: "d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	I1129 10:14:38.495886  473516 cri.go:89] found id: "b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7"
	I1129 10:14:38.495889  473516 cri.go:89] found id: "7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	I1129 10:14:38.495892  473516 cri.go:89] found id: ""
	I1129 10:14:38.495947  473516 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:14:38.507523  473516 retry.go:31] will retry after 497.840093ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:38Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:14:39.005916  473516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:14:39.019873  473516 pause.go:52] kubelet running: false
	I1129 10:14:39.019944  473516 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:14:39.174818  473516 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:14:39.174953  473516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:14:39.241055  473516 cri.go:89] found id: "90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260"
	I1129 10:14:39.241081  473516 cri.go:89] found id: "fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad"
	I1129 10:14:39.241087  473516 cri.go:89] found id: "dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52"
	I1129 10:14:39.241091  473516 cri.go:89] found id: "d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5"
	I1129 10:14:39.241094  473516 cri.go:89] found id: "de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f"
	I1129 10:14:39.241098  473516 cri.go:89] found id: "8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7"
	I1129 10:14:39.241101  473516 cri.go:89] found id: "78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82"
	I1129 10:14:39.241104  473516 cri.go:89] found id: "863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2"
	I1129 10:14:39.241107  473516 cri.go:89] found id: "22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893"
	I1129 10:14:39.241113  473516 cri.go:89] found id: "212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de"
	I1129 10:14:39.241116  473516 cri.go:89] found id: "7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a"
	I1129 10:14:39.241119  473516 cri.go:89] found id: "d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	I1129 10:14:39.241122  473516 cri.go:89] found id: "b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7"
	I1129 10:14:39.241128  473516 cri.go:89] found id: "7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	I1129 10:14:39.241134  473516 cri.go:89] found id: ""
	I1129 10:14:39.241186  473516 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:14:39.255797  473516 out.go:203] 
	W1129 10:14:39.258725  473516 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:14:39.258756  473516 out.go:285] * 
	* 
	W1129 10:14:39.266049  473516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:14:39.269168  473516 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-377932 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-377932
helpers_test.go:243: (dbg) docker inspect pause-377932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926",
	        "Created": "2025-11-29T10:12:56.64444144Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:12:56.705483694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/hostname",
	        "HostsPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/hosts",
	        "LogPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926-json.log",
	        "Name": "/pause-377932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-377932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-377932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926",
	                "LowerDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-377932",
	                "Source": "/var/lib/docker/volumes/pause-377932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-377932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-377932",
	                "name.minikube.sigs.k8s.io": "pause-377932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da248243934a9d7b24553e5e68da3f5ecf02a9d578c3598d5802d7662405ed8e",
	            "SandboxKey": "/var/run/docker/netns/da248243934a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-377932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:11:02:cb:7c:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37a5cdec6122efe72d1ddafdb31a4be1ec5e23b8fef2a3904e2b7ffc60825d9f",
	                    "EndpointID": "2e31f421f8f281334209c17723f2032e256281d84833b0eebafa13d0ecfa44d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-377932",
	                        "c57b58a3396e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-377932 -n pause-377932
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-377932 -n pause-377932: exit status 2 (345.998498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-377932 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-377932 logs -n 25: (1.601651117s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:04 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p missing-upgrade-246693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-246693    │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:05 UTC │
	│ delete  │ -p missing-upgrade-246693                                                                                                                │ missing-upgrade-246693    │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:05 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:06 UTC │
	│ stop    │ -p kubernetes-upgrade-510809                                                                                                             │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:11 UTC │
	│ delete  │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ ssh     │ -p NoKubernetes-399835 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │                     │
	│ stop    │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ start   │ -p NoKubernetes-399835 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ ssh     │ -p NoKubernetes-399835 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │                     │
	│ delete  │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ start   │ -p stopped-upgrade-467241 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-467241    │ jenkins │ v1.35.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:08 UTC │
	│ stop    │ stopped-upgrade-467241 stop                                                                                                              │ stopped-upgrade-467241    │ jenkins │ v1.35.0 │ 29 Nov 25 10:08 UTC │ 29 Nov 25 10:08 UTC │
	│ start   │ -p stopped-upgrade-467241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-467241    │ jenkins │ v1.37.0 │ 29 Nov 25 10:08 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │                     │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:11 UTC │
	│ delete  │ -p kubernetes-upgrade-510809                                                                                                             │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:11 UTC │
	│ start   │ -p running-upgrade-493711 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-493711    │ jenkins │ v1.35.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p running-upgrade-493711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-493711    │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │                     │
	│ delete  │ -p stopped-upgrade-467241                                                                                                                │ stopped-upgrade-467241    │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p pause-377932 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │ 29 Nov 25 10:14 UTC │
	│ start   │ -p pause-377932 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:14 UTC │ 29 Nov 25 10:14 UTC │
	│ pause   │ -p pause-377932 --alsologtostderr -v=5                                                                                                   │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:14:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:14:10.207255  471543 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:14:10.207433  471543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:10.207463  471543 out.go:374] Setting ErrFile to fd 2...
	I1129 10:14:10.207484  471543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:10.207748  471543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:14:10.208124  471543 out.go:368] Setting JSON to false
	I1129 10:14:10.209140  471543 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10600,"bootTime":1764400651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:14:10.209243  471543 start.go:143] virtualization:  
	I1129 10:14:10.215134  471543 out.go:179] * [pause-377932] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:14:10.218360  471543 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:14:10.218450  471543 notify.go:221] Checking for updates...
	I1129 10:14:10.224508  471543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:14:10.227551  471543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:14:10.230533  471543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:14:10.233450  471543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:14:10.236263  471543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:14:10.239761  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:10.240650  471543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:14:10.263899  471543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:14:10.264039  471543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:14:10.331416  471543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:14:10.319867528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:14:10.331530  471543 docker.go:319] overlay module found
	I1129 10:14:10.334725  471543 out.go:179] * Using the docker driver based on existing profile
	I1129 10:14:10.337521  471543 start.go:309] selected driver: docker
	I1129 10:14:10.337537  471543 start.go:927] validating driver "docker" against &{Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:10.337683  471543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:14:10.337793  471543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:14:10.400110  471543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:14:10.39071891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:14:10.400511  471543 cni.go:84] Creating CNI manager for ""
	I1129 10:14:10.400579  471543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:14:10.400631  471543 start.go:353] cluster config:
	{Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:10.405538  471543 out.go:179] * Starting "pause-377932" primary control-plane node in "pause-377932" cluster
	I1129 10:14:10.408543  471543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:14:10.411584  471543 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:14:10.414670  471543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:14:10.414967  471543 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:14:10.415010  471543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:14:10.415033  471543 cache.go:65] Caching tarball of preloaded images
	I1129 10:14:10.415097  471543 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:14:10.415111  471543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:14:10.415253  471543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/config.json ...
	I1129 10:14:10.436486  471543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:14:10.436509  471543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:14:10.436525  471543 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:14:10.436559  471543 start.go:360] acquireMachinesLock for pause-377932: {Name:mkfed25658d78d0770cd24f56da636a13fb6ca68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:14:10.436636  471543 start.go:364] duration metric: took 45.596µs to acquireMachinesLock for "pause-377932"
	I1129 10:14:10.436664  471543 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:14:10.436673  471543 fix.go:54] fixHost starting: 
	I1129 10:14:10.436962  471543 cli_runner.go:164] Run: docker container inspect pause-377932 --format={{.State.Status}}
	I1129 10:14:10.454556  471543 fix.go:112] recreateIfNeeded on pause-377932: state=Running err=<nil>
	W1129 10:14:10.454597  471543 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:14:08.172998  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:37352->192.168.85.2:8443: read: connection reset by peer
	I1129 10:14:08.173053  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:08.173114  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:08.225227  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:08.225300  464519 cri.go:89] found id: "4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	I1129 10:14:08.225312  464519 cri.go:89] found id: ""
	I1129 10:14:08.225320  464519 logs.go:282] 2 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4]
	I1129 10:14:08.225375  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.229340  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.233062  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:08.233162  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:08.273389  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:08.273411  464519 cri.go:89] found id: ""
	I1129 10:14:08.273419  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:08.273474  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.276989  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:08.277061  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:08.313801  464519 cri.go:89] found id: ""
	I1129 10:14:08.313827  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.313836  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:08.313843  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:08.313903  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:08.420617  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:08.420639  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:08.420644  464519 cri.go:89] found id: ""
	I1129 10:14:08.420652  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:08.420706  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.428464  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.434281  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:08.434398  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:08.487054  464519 cri.go:89] found id: ""
	I1129 10:14:08.487079  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.487089  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:08.487096  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:08.487155  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:08.542050  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:08.542099  464519 cri.go:89] found id: ""
	I1129 10:14:08.542109  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:08.542165  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.545804  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:08.545872  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:08.591293  464519 cri.go:89] found id: ""
	I1129 10:14:08.591333  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.591349  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:08.591356  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:08.591412  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:08.636753  464519 cri.go:89] found id: ""
	I1129 10:14:08.636777  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.636785  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:08.636794  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:08.636806  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:08.657322  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:08.657392  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:08.733280  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:08.733302  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:08.733314  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:08.784240  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:08.784271  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:08.827206  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:08.827237  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:08.897456  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:08.897493  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:09.026186  464519 logs.go:123] Gathering logs for kube-apiserver [4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4] ...
	I1129 10:14:09.026227  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	W1129 10:14:09.067642  464519 logs.go:130] failed kube-apiserver [4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4": Process exited with status 1
	stdout:
	
	stderr:
	E1129 10:14:09.064368    4179 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist" containerID="4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	time="2025-11-29T10:14:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1129 10:14:09.064368    4179 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist" containerID="4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	time="2025-11-29T10:14:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist"
	
	** /stderr **
	I1129 10:14:09.067664  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:09.067677  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:09.148112  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:09.148151  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:09.194125  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:09.194152  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:09.232493  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:09.232521  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:10.457822  471543 out.go:252] * Updating the running docker "pause-377932" container ...
	I1129 10:14:10.457871  471543 machine.go:94] provisionDockerMachine start ...
	I1129 10:14:10.457960  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.475915  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.476250  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.476267  471543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:14:10.629781  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377932
	
	I1129 10:14:10.629823  471543 ubuntu.go:182] provisioning hostname "pause-377932"
	I1129 10:14:10.629934  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.647922  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.648240  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.648257  471543 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-377932 && echo "pause-377932" | sudo tee /etc/hostname
	I1129 10:14:10.815356  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377932
	
	I1129 10:14:10.815436  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.834858  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.835184  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.835224  471543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-377932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-377932/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-377932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:14:10.990408  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:14:10.990449  471543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:14:10.990484  471543 ubuntu.go:190] setting up certificates
	I1129 10:14:10.990498  471543 provision.go:84] configureAuth start
	I1129 10:14:10.990559  471543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-377932
	I1129 10:14:11.020546  471543 provision.go:143] copyHostCerts
	I1129 10:14:11.020629  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:14:11.020648  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:14:11.020725  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:14:11.020838  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:14:11.020850  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:14:11.020878  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:14:11.020944  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:14:11.020958  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:14:11.020985  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:14:11.021044  471543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.pause-377932 san=[127.0.0.1 192.168.76.2 localhost minikube pause-377932]
	I1129 10:14:11.119304  471543 provision.go:177] copyRemoteCerts
	I1129 10:14:11.119384  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:14:11.119435  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:11.138842  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:11.245778  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:14:11.263725  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 10:14:11.281976  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:14:11.299696  471543 provision.go:87] duration metric: took 309.172661ms to configureAuth
	I1129 10:14:11.299764  471543 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:14:11.300006  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:11.300148  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:11.319068  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:11.319374  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:11.319393  471543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:14:11.779615  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:11.780087  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:11.780138  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:11.780196  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:11.818679  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:11.818702  464519 cri.go:89] found id: ""
	I1129 10:14:11.818711  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:11.818767  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.822258  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:11.822337  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:11.861843  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:11.861866  464519 cri.go:89] found id: ""
	I1129 10:14:11.861874  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:11.861935  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.865884  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:11.865963  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:11.903181  464519 cri.go:89] found id: ""
	I1129 10:14:11.903204  464519 logs.go:282] 0 containers: []
	W1129 10:14:11.903212  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:11.903219  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:11.903278  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:11.941785  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:11.941808  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:11.941813  464519 cri.go:89] found id: ""
	I1129 10:14:11.941820  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:11.941883  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.945523  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.948924  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:11.948994  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:11.985791  464519 cri.go:89] found id: ""
	I1129 10:14:11.985822  464519 logs.go:282] 0 containers: []
	W1129 10:14:11.985830  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:11.985838  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:11.985895  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:12.023905  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:12.023936  464519 cri.go:89] found id: ""
	I1129 10:14:12.023946  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:12.024012  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:12.027847  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:12.027955  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:12.064764  464519 cri.go:89] found id: ""
	I1129 10:14:12.064789  464519 logs.go:282] 0 containers: []
	W1129 10:14:12.064797  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:12.064804  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:12.064863  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:12.106266  464519 cri.go:89] found id: ""
	I1129 10:14:12.106303  464519 logs.go:282] 0 containers: []
	W1129 10:14:12.106312  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:12.106327  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:12.106338  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:12.226097  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:12.226139  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:12.271856  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:12.271889  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:12.317079  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:12.317125  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:12.356616  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:12.356654  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:12.404789  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:12.404815  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:12.468578  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:12.468618  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:12.486340  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:12.486368  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:12.557406  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:12.557425  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:12.557437  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:12.661847  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:12.661887  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:15.211328  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:15.211782  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:15.211833  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:15.211905  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:15.248588  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:15.248614  464519 cri.go:89] found id: ""
	I1129 10:14:15.248624  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:15.248695  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.252308  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:15.252383  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:15.291407  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:15.291428  464519 cri.go:89] found id: ""
	I1129 10:14:15.291436  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:15.291497  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.295159  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:15.295229  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:15.331159  464519 cri.go:89] found id: ""
	I1129 10:14:15.331185  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.331193  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:15.331202  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:15.331261  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:15.371539  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:15.371562  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:15.371567  464519 cri.go:89] found id: ""
	I1129 10:14:15.371585  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:15.371642  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.375287  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.380027  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:15.380105  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:15.418069  464519 cri.go:89] found id: ""
	I1129 10:14:15.418113  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.418121  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:15.418127  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:15.418186  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:15.456411  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:15.456438  464519 cri.go:89] found id: ""
	I1129 10:14:15.456447  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:15.456504  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.460082  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:15.460176  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:15.501782  464519 cri.go:89] found id: ""
	I1129 10:14:15.501855  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.501870  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:15.501878  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:15.501937  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:15.539510  464519 cri.go:89] found id: ""
	I1129 10:14:15.539546  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.539554  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:15.539568  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:15.539580  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:16.705832  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:14:16.705852  471543 machine.go:97] duration metric: took 6.247971843s to provisionDockerMachine
	I1129 10:14:16.705862  471543 start.go:293] postStartSetup for "pause-377932" (driver="docker")
	I1129 10:14:16.705873  471543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:14:16.705957  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:14:16.706008  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:16.724977  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:16.830227  471543 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:14:16.833715  471543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:14:16.833744  471543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:14:16.833755  471543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:14:16.833841  471543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:14:16.833968  471543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:14:16.834102  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:14:16.842157  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:14:16.861131  471543 start.go:296] duration metric: took 155.252059ms for postStartSetup
	I1129 10:14:16.861314  471543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:14:16.861374  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:16.880256  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:16.983412  471543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:14:16.988619  471543 fix.go:56] duration metric: took 6.551939965s for fixHost
	I1129 10:14:16.988646  471543 start.go:83] releasing machines lock for "pause-377932", held for 6.551998213s
	I1129 10:14:16.988714  471543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-377932
	I1129 10:14:17.006355  471543 ssh_runner.go:195] Run: cat /version.json
	I1129 10:14:17.006448  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:17.006735  471543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:14:17.006810  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:17.027187  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:17.027238  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:17.221187  471543 ssh_runner.go:195] Run: systemctl --version
	I1129 10:14:17.229160  471543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:14:17.272984  471543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:14:17.277349  471543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:14:17.277431  471543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:14:17.285453  471543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:14:17.285476  471543 start.go:496] detecting cgroup driver to use...
	I1129 10:14:17.285508  471543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:14:17.285558  471543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:14:17.300950  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:14:17.314379  471543 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:14:17.314566  471543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:14:17.330673  471543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:14:17.344861  471543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:14:17.494696  471543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:14:17.633763  471543 docker.go:234] disabling docker service ...
	I1129 10:14:17.633838  471543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:14:17.649049  471543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:14:17.662291  471543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:14:17.800222  471543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:14:17.947915  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:14:17.961235  471543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:14:17.977117  471543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:14:17.977230  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:17.986701  471543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:14:17.986822  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:17.996512  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.008104  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.019473  471543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:14:18.028475  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.037900  471543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.046759  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.055797  471543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:14:18.063911  471543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:14:18.071633  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:18.200600  471543 ssh_runner.go:195] Run: sudo systemctl restart crio
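	Taken together, the commands above boil down to the following cri-o setup, condensed here as a shell sketch. Every path, image tag, and sed expression is copied from the log lines above; this is illustrative only, not an extra step the test ran.

		# point crictl at the cri-o socket
		sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

		# pin the pause image and cgroup driver in cri-o's drop-in config
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

		# allow low ports inside pods, enable IPv4 forwarding, then restart cri-o
		sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		sudo systemctl daemon-reload && sudo systemctl restart crio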
	I1129 10:14:18.421068  471543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:14:18.421217  471543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:14:18.425281  471543 start.go:564] Will wait 60s for crictl version
	I1129 10:14:18.425379  471543 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.429159  471543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:14:18.459280  471543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:14:18.459435  471543 ssh_runner.go:195] Run: crio --version
	I1129 10:14:18.490122  471543 ssh_runner.go:195] Run: crio --version
	I1129 10:14:18.522496  471543 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:14:18.525460  471543 cli_runner.go:164] Run: docker network inspect pause-377932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:14:18.543075  471543 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:14:18.547146  471543 kubeadm.go:884] updating cluster {Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:14:18.547300  471543 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:14:18.547361  471543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:14:18.585847  471543 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:14:18.585874  471543 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:14:18.585951  471543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:14:18.617356  471543 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:14:18.617377  471543 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:14:18.617385  471543 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:14:18.617486  471543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-377932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
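	The unit fragment above is written to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once in place it can be checked with standard systemd tooling; the two commands here are illustrative and are not part of the test run (the test itself only does the daemon-reload and start shown further down).

		# show the effective kubelet unit, including the minikube drop-in
		systemctl cat kubelet
		# pick up the drop-in and (re)start the kubelet
		sudo systemctl daemon-reload && sudo systemctl restart kubelet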
	I1129 10:14:18.617559  471543 ssh_runner.go:195] Run: crio config
	I1129 10:14:18.705913  471543 cni.go:84] Creating CNI manager for ""
	I1129 10:14:18.705986  471543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:14:18.706018  471543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:14:18.706108  471543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-377932 NodeName:pause-377932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:14:18.706277  471543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-377932"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:14:18.706390  471543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:14:18.714440  471543 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:14:18.714558  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:14:18.722204  471543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1129 10:14:18.735893  471543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:14:18.750047  471543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
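	The manifest rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new. If you want to sanity-check a config of this shape by hand, recent kubeadm releases can validate it directly; this is an illustrative extra step (the test itself only diffs the new file against the existing one later on), and it assumes the kubeadm binary staged by minikube under /var/lib/minikube/binaries.

		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new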
	I1129 10:14:18.765728  471543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:14:18.770164  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:18.955212  471543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:14:18.971931  471543 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932 for IP: 192.168.76.2
	I1129 10:14:18.971951  471543 certs.go:195] generating shared ca certs ...
	I1129 10:14:18.971968  471543 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:18.972115  471543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:14:18.972157  471543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:14:18.972164  471543 certs.go:257] generating profile certs ...
	I1129 10:14:18.972246  471543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key
	I1129 10:14:18.972314  471543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.key.83655726
	I1129 10:14:18.972356  471543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.key
	I1129 10:14:18.972461  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:14:18.972490  471543 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:14:18.972498  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:14:18.972527  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:14:18.972554  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:14:18.972579  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:14:18.972620  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:14:18.973191  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:14:18.998957  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:14:19.026131  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:14:19.053720  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:14:19.077937  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 10:14:19.100973  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:14:19.125515  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:14:19.144852  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:14:19.163290  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:14:19.181519  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:14:19.202013  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:14:19.221657  471543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:14:19.237854  471543 ssh_runner.go:195] Run: openssl version
	I1129 10:14:19.244916  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:14:19.254448  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.262275  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.262398  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.315127  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:14:19.324658  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:14:19.334386  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.352484  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.352606  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.498447  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:14:19.532733  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:14:19.546587  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.555020  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.555093  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.696863  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
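	The b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: OpenSSL resolves CA certificates in /etc/ssl/certs by that hash, which is exactly what the test-and-link commands establish. A minimal sketch of the same mechanism (the HASH variable is only for illustration):

		# compute the subject hash and create the <hash>.0 symlink OpenSSL expects
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"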
	I1129 10:14:19.725007  471543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:14:19.742602  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:14:19.929703  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:14:20.017091  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:14:20.095234  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:14:20.158829  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:14:20.215645  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
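	The -checkend 86400 calls above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and non-zero otherwise, which is presumably what drives the decision to reuse rather than regenerate the certificates. For example:

		if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		  echo "certificate is valid for at least another 24h"
		else
		  echo "certificate expires (or has expired) within 24h"
		fi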
	I1129 10:14:20.264149  471543 kubeadm.go:401] StartCluster: {Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:20.264281  471543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:14:20.264342  471543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:14:20.303221  471543 cri.go:89] found id: "90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260"
	I1129 10:14:20.303242  471543 cri.go:89] found id: "fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad"
	I1129 10:14:20.303246  471543 cri.go:89] found id: "dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52"
	I1129 10:14:20.303250  471543 cri.go:89] found id: "d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5"
	I1129 10:14:20.303253  471543 cri.go:89] found id: "de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f"
	I1129 10:14:20.303257  471543 cri.go:89] found id: "8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7"
	I1129 10:14:20.303260  471543 cri.go:89] found id: "78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82"
	I1129 10:14:20.303263  471543 cri.go:89] found id: "863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2"
	I1129 10:14:20.303266  471543 cri.go:89] found id: "22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893"
	I1129 10:14:20.303272  471543 cri.go:89] found id: "212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de"
	I1129 10:14:20.303276  471543 cri.go:89] found id: "7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a"
	I1129 10:14:20.303279  471543 cri.go:89] found id: "d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	I1129 10:14:20.303281  471543 cri.go:89] found id: "b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7"
	I1129 10:14:20.303284  471543 cri.go:89] found id: "7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	I1129 10:14:20.303287  471543 cri.go:89] found id: ""
	I1129 10:14:20.303336  471543 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:14:20.334479  471543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:20Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:14:20.334624  471543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:14:20.346583  471543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:14:20.346658  471543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:14:20.346731  471543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:14:20.359392  471543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:14:20.360130  471543 kubeconfig.go:125] found "pause-377932" server: "https://192.168.76.2:8443"
	I1129 10:14:20.361086  471543 kapi.go:59] client config for pause-377932: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key", CAFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 10:14:20.361839  471543 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1129 10:14:20.361920  471543 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1129 10:14:20.361941  471543 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1129 10:14:20.361971  471543 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1129 10:14:20.361992  471543 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1129 10:14:20.362448  471543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:14:20.371795  471543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:14:20.371878  471543 kubeadm.go:602] duration metric: took 25.200264ms to restartPrimaryControlPlane
	I1129 10:14:20.371904  471543 kubeadm.go:403] duration metric: took 107.763703ms to StartCluster
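	The "does not require reconfiguration" decision above follows directly from the diff: when the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, diff exits 0 and the running control plane is left untouched. The check reduces to something like:

		# exit status 0 means the rendered config is unchanged, so no kubeadm re-init is needed
		sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
		  && echo "config unchanged - skip reconfiguration" \
		  || echo "config changed - control plane would be reconfigured"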
	I1129 10:14:20.371944  471543 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:20.372044  471543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:14:20.373052  471543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:20.373354  471543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:14:20.373806  471543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:14:20.373977  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:20.377407  471543 out.go:179] * Verifying Kubernetes components...
	I1129 10:14:20.377513  471543 out.go:179] * Enabled addons: 
	I1129 10:14:15.653084  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:15.653125  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:15.722376  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:15.722393  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:15.722406  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:15.766848  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:15.766878  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:15.846086  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:15.846126  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:15.887481  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:15.887510  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:15.939346  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:15.939375  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:15.957751  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:15.957783  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:16.002665  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:16.002756  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:16.045224  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:16.045300  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:18.607469  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:18.607856  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:18.607909  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:18.607968  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:18.658452  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:18.658479  464519 cri.go:89] found id: ""
	I1129 10:14:18.658495  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:18.658563  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.663167  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:18.663256  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:18.736250  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:18.736282  464519 cri.go:89] found id: ""
	I1129 10:14:18.736291  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:18.736363  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.741561  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:18.741631  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:18.800886  464519 cri.go:89] found id: ""
	I1129 10:14:18.800919  464519 logs.go:282] 0 containers: []
	W1129 10:14:18.800928  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:18.800935  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:18.801004  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:18.871691  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:18.871715  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:18.871720  464519 cri.go:89] found id: ""
	I1129 10:14:18.871728  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:18.871788  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.875696  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.879792  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:18.879863  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:18.924788  464519 cri.go:89] found id: ""
	I1129 10:14:18.924809  464519 logs.go:282] 0 containers: []
	W1129 10:14:18.924817  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:18.924823  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:18.924878  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:18.978007  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:18.978025  464519 cri.go:89] found id: ""
	I1129 10:14:18.978033  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:18.978109  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.983041  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:18.983114  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:19.035307  464519 cri.go:89] found id: ""
	I1129 10:14:19.035329  464519 logs.go:282] 0 containers: []
	W1129 10:14:19.035338  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:19.035344  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:19.035404  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:19.084528  464519 cri.go:89] found id: ""
	I1129 10:14:19.084605  464519 logs.go:282] 0 containers: []
	W1129 10:14:19.084631  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:19.084649  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:19.084664  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:19.243566  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:19.243635  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:19.294884  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:19.294963  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:19.367411  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:19.367482  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:19.442814  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:19.442899  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:19.524659  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:19.524691  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:19.626340  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:19.626409  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:19.655258  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:19.655344  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:19.783414  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:19.783476  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:19.783504  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:19.869879  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:19.869953  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:20.381445  471543 addons.go:530] duration metric: took 7.627627ms for enable addons: enabled=[]
	I1129 10:14:20.381603  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:20.652550  471543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:14:20.670850  471543 node_ready.go:35] waiting up to 6m0s for node "pause-377932" to be "Ready" ...
	I1129 10:14:24.757175  471543 node_ready.go:49] node "pause-377932" is "Ready"
	I1129 10:14:24.757257  471543 node_ready.go:38] duration metric: took 4.086323201s for node "pause-377932" to be "Ready" ...
	I1129 10:14:24.757285  471543 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:14:24.757372  471543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:14:24.775980  471543 api_server.go:72] duration metric: took 4.402566963s to wait for apiserver process to appear ...
	I1129 10:14:24.776067  471543 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:14:24.776102  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:24.817468  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 10:14:24.817585  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 10:14:22.516247  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:22.516621  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:22.516669  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:22.516729  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:22.588017  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:22.588043  464519 cri.go:89] found id: ""
	I1129 10:14:22.588052  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:22.588110  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.592540  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:22.592616  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:22.666679  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:22.666707  464519 cri.go:89] found id: ""
	I1129 10:14:22.666716  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:22.666774  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.678004  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:22.678203  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:22.756247  464519 cri.go:89] found id: ""
	I1129 10:14:22.756277  464519 logs.go:282] 0 containers: []
	W1129 10:14:22.756286  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:22.756293  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:22.756354  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:22.840108  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:22.840132  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:22.840137  464519 cri.go:89] found id: ""
	I1129 10:14:22.840145  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:22.840198  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.844090  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.847688  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:22.847765  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:22.912524  464519 cri.go:89] found id: ""
	I1129 10:14:22.912553  464519 logs.go:282] 0 containers: []
	W1129 10:14:22.912562  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:22.912569  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:22.912680  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:22.970963  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:22.970989  464519 cri.go:89] found id: ""
	I1129 10:14:22.970998  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:22.971054  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.975500  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:22.975620  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:23.022858  464519 cri.go:89] found id: ""
	I1129 10:14:23.022887  464519 logs.go:282] 0 containers: []
	W1129 10:14:23.022895  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:23.022902  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:23.022962  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:23.075279  464519 cri.go:89] found id: ""
	I1129 10:14:23.075316  464519 logs.go:282] 0 containers: []
	W1129 10:14:23.075325  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:23.075358  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:23.075401  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:23.134297  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:23.134331  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:23.256293  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:23.256330  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:23.331802  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:23.331840  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:23.476444  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:23.476488  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:23.603443  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:23.603467  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:23.603480  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:23.682825  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:23.682857  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:23.748553  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:23.748596  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:23.817833  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:23.817866  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:23.906450  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:23.906490  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:25.276173  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:25.288138  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:14:25.288234  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:14:25.776525  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:25.784997  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:14:25.785074  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:14:26.276787  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:26.284965  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:14:26.286021  471543 api_server.go:141] control plane version: v1.34.1
	I1129 10:14:26.286045  471543 api_server.go:131] duration metric: took 1.509957018s to wait for apiserver health ...
	I1129 10:14:26.286055  471543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:14:26.289574  471543 system_pods.go:59] 7 kube-system pods found
	I1129 10:14:26.289620  471543 system_pods.go:61] "coredns-66bc5c9577-dzxhh" [7f51682b-549d-403a-8927-01e86fc63f8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:14:26.289629  471543 system_pods.go:61] "etcd-pause-377932" [0fcb1c6d-c6d2-48a9-b7e9-e4399f9995ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:14:26.289635  471543 system_pods.go:61] "kindnet-8fr6g" [a1abc657-1bd4-43cc-860c-d23afb2e0cac] Running
	I1129 10:14:26.289641  471543 system_pods.go:61] "kube-apiserver-pause-377932" [4ae1d404-657d-4d83-a47d-1155bab5da50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:14:26.289647  471543 system_pods.go:61] "kube-controller-manager-pause-377932" [b0b1b8f9-4a25-44f5-a305-c25e419a2d50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:14:26.289658  471543 system_pods.go:61] "kube-proxy-5tg9h" [7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d] Running
	I1129 10:14:26.289664  471543 system_pods.go:61] "kube-scheduler-pause-377932" [4546f2ec-ec43-4e8d-8933-295179d3d385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:14:26.289673  471543 system_pods.go:74] duration metric: took 3.610567ms to wait for pod list to return data ...
	I1129 10:14:26.289684  471543 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:14:26.292221  471543 default_sa.go:45] found service account: "default"
	I1129 10:14:26.292246  471543 default_sa.go:55] duration metric: took 2.555827ms for default service account to be created ...
	I1129 10:14:26.292266  471543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:14:26.295033  471543 system_pods.go:86] 7 kube-system pods found
	I1129 10:14:26.295072  471543 system_pods.go:89] "coredns-66bc5c9577-dzxhh" [7f51682b-549d-403a-8927-01e86fc63f8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:14:26.295081  471543 system_pods.go:89] "etcd-pause-377932" [0fcb1c6d-c6d2-48a9-b7e9-e4399f9995ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:14:26.295091  471543 system_pods.go:89] "kindnet-8fr6g" [a1abc657-1bd4-43cc-860c-d23afb2e0cac] Running
	I1129 10:14:26.295104  471543 system_pods.go:89] "kube-apiserver-pause-377932" [4ae1d404-657d-4d83-a47d-1155bab5da50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:14:26.295121  471543 system_pods.go:89] "kube-controller-manager-pause-377932" [b0b1b8f9-4a25-44f5-a305-c25e419a2d50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:14:26.295130  471543 system_pods.go:89] "kube-proxy-5tg9h" [7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d] Running
	I1129 10:14:26.295148  471543 system_pods.go:89] "kube-scheduler-pause-377932" [4546f2ec-ec43-4e8d-8933-295179d3d385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:14:26.295159  471543 system_pods.go:126] duration metric: took 2.888089ms to wait for k8s-apps to be running ...
	I1129 10:14:26.295167  471543 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:14:26.295233  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:14:26.309354  471543 system_svc.go:56] duration metric: took 14.175325ms WaitForService to wait for kubelet
	I1129 10:14:26.309389  471543 kubeadm.go:587] duration metric: took 5.935981276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:14:26.309424  471543 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:14:26.312326  471543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:14:26.312353  471543 node_conditions.go:123] node cpu capacity is 2
	I1129 10:14:26.312366  471543 node_conditions.go:105] duration metric: took 2.937238ms to run NodePressure ...
	I1129 10:14:26.312379  471543 start.go:242] waiting for startup goroutines ...
	I1129 10:14:26.312387  471543 start.go:247] waiting for cluster config update ...
	I1129 10:14:26.312395  471543 start.go:256] writing updated cluster config ...
	I1129 10:14:26.312707  471543 ssh_runner.go:195] Run: rm -f paused
	I1129 10:14:26.316272  471543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:14:26.316917  471543 kapi.go:59] client config for pause-377932: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key", CAFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 10:14:26.320131  471543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzxhh" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:28.325273  471543 pod_ready.go:104] pod "coredns-66bc5c9577-dzxhh" is not "Ready", error: <nil>
	I1129 10:14:26.431327  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:26.431774  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:26.431865  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:26.431940  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:26.473480  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:26.473504  464519 cri.go:89] found id: ""
	I1129 10:14:26.473513  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:26.473569  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.478269  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:26.478341  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:26.520325  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:26.520346  464519 cri.go:89] found id: ""
	I1129 10:14:26.520354  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:26.520431  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.523902  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:26.524015  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:26.565113  464519 cri.go:89] found id: ""
	I1129 10:14:26.565145  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.565159  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:26.565166  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:26.565240  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:26.606214  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:26.606235  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:26.606240  464519 cri.go:89] found id: ""
	I1129 10:14:26.606247  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:26.606304  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.609935  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.613645  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:26.613721  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:26.650856  464519 cri.go:89] found id: ""
	I1129 10:14:26.650880  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.650888  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:26.650895  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:26.650959  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:26.692642  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:26.692665  464519 cri.go:89] found id: ""
	I1129 10:14:26.692674  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:26.692757  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.696294  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:26.696407  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:26.733135  464519 cri.go:89] found id: ""
	I1129 10:14:26.733160  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.733169  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:26.733175  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:26.733264  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:26.791510  464519 cri.go:89] found id: ""
	I1129 10:14:26.791535  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.791544  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:26.791559  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:26.791571  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:26.894460  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:26.894496  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:26.945773  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:26.945803  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:26.991247  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:26.991273  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:27.060607  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:27.060646  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:27.107741  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:27.107768  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:27.227222  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:27.227266  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:27.246941  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:27.246967  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:27.336139  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:27.336162  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:27.336178  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:27.390648  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:27.390674  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:29.937339  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:29.937793  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:29.937857  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:29.937931  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:29.975956  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:29.975978  464519 cri.go:89] found id: ""
	I1129 10:14:29.975987  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:29.976043  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:29.979718  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:29.979795  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:30.069364  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:30.069385  464519 cri.go:89] found id: ""
	I1129 10:14:30.069394  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:30.069465  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.078531  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:30.078661  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:30.123654  464519 cri.go:89] found id: ""
	I1129 10:14:30.123679  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.123688  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:30.123695  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:30.123760  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:30.163132  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:30.163160  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:30.163165  464519 cri.go:89] found id: ""
	I1129 10:14:30.163173  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:30.163231  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.167270  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.170950  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:30.171052  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:30.209863  464519 cri.go:89] found id: ""
	I1129 10:14:30.209889  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.209898  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:30.209905  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:30.209965  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:30.249566  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:30.249588  464519 cri.go:89] found id: ""
	I1129 10:14:30.249597  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:30.249655  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.253459  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:30.253533  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:30.292349  464519 cri.go:89] found id: ""
	I1129 10:14:30.292386  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.292395  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:30.292418  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:30.292497  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:30.335757  464519 cri.go:89] found id: ""
	I1129 10:14:30.335787  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.335796  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:30.335811  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:30.335824  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:30.389964  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:30.389997  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:30.430495  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:30.430524  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:30.494462  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:30.494500  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:30.540485  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:30.540516  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:30.326812  471543 pod_ready.go:104] pod "coredns-66bc5c9577-dzxhh" is not "Ready", error: <nil>
	I1129 10:14:31.827280  471543 pod_ready.go:94] pod "coredns-66bc5c9577-dzxhh" is "Ready"
	I1129 10:14:31.827311  471543 pod_ready.go:86] duration metric: took 5.507154696s for pod "coredns-66bc5c9577-dzxhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:31.833398  471543 pod_ready.go:83] waiting for pod "etcd-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:33.841078  471543 pod_ready.go:104] pod "etcd-pause-377932" is not "Ready", error: <nil>
	I1129 10:14:34.840294  471543 pod_ready.go:94] pod "etcd-pause-377932" is "Ready"
	I1129 10:14:34.840326  471543 pod_ready.go:86] duration metric: took 3.006891445s for pod "etcd-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:34.842885  471543 pod_ready.go:83] waiting for pod "kube-apiserver-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:30.616098  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:30.616117  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:30.616131  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:30.694878  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:30.694918  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:30.734373  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:30.734404  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:30.868463  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:30.868510  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:30.886980  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:30.887012  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.432443  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:33.432868  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:33.432915  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:33.432974  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:33.470250  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.470270  464519 cri.go:89] found id: ""
	I1129 10:14:33.470278  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:33.470337  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.473962  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:33.474036  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:33.515016  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:33.515039  464519 cri.go:89] found id: ""
	I1129 10:14:33.515047  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:33.515100  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.518732  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:33.518804  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:33.557911  464519 cri.go:89] found id: ""
	I1129 10:14:33.557939  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.557958  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:33.557965  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:33.558043  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:33.608145  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:33.608166  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:33.608171  464519 cri.go:89] found id: ""
	I1129 10:14:33.608178  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:33.608233  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.611828  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.615424  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:33.615503  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:33.655729  464519 cri.go:89] found id: ""
	I1129 10:14:33.655755  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.655764  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:33.655771  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:33.655832  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:33.695385  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:33.695456  464519 cri.go:89] found id: ""
	I1129 10:14:33.695479  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:33.695564  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.699100  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:33.699180  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:33.737669  464519 cri.go:89] found id: ""
	I1129 10:14:33.737732  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.737754  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:33.737773  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:33.737855  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:33.782943  464519 cri.go:89] found id: ""
	I1129 10:14:33.783020  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.783044  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:33.783072  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:33.783113  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:33.863899  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:33.863933  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:33.863954  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.907348  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:33.907384  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:33.986674  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:33.986711  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:34.026252  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:34.026293  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:34.064257  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:34.064286  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:34.127502  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:34.127539  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:34.173016  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:34.173045  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:34.290514  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:34.290550  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:34.309122  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:34.309153  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:36.848671  471543 pod_ready.go:94] pod "kube-apiserver-pause-377932" is "Ready"
	I1129 10:14:36.848707  471543 pod_ready.go:86] duration metric: took 2.00579282s for pod "kube-apiserver-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.851563  471543 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.856696  471543 pod_ready.go:94] pod "kube-controller-manager-pause-377932" is "Ready"
	I1129 10:14:36.856723  471543 pod_ready.go:86] duration metric: took 5.129213ms for pod "kube-controller-manager-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.859188  471543 pod_ready.go:83] waiting for pod "kube-proxy-5tg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.864043  471543 pod_ready.go:94] pod "kube-proxy-5tg9h" is "Ready"
	I1129 10:14:36.864076  471543 pod_ready.go:86] duration metric: took 4.859129ms for pod "kube-proxy-5tg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.866768  471543 pod_ready.go:83] waiting for pod "kube-scheduler-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:37.237857  471543 pod_ready.go:94] pod "kube-scheduler-pause-377932" is "Ready"
	I1129 10:14:37.237889  471543 pod_ready.go:86] duration metric: took 371.091872ms for pod "kube-scheduler-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:37.237902  471543 pod_ready.go:40] duration metric: took 10.921600408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:14:37.312513  471543 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:14:37.315675  471543 out.go:179] * Done! kubectl is now configured to use "pause-377932" cluster and "default" namespace by default
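For context: the probes above show the second "minikube start" polling the apiserver's /healthz endpoint on the node, receiving 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200, before it waits on the individual kube-system pods. A rough hand-run equivalent of that poll, not minikube's actual client code, assuming curl is available on the node and using -k to skip TLS verification for brevity:

	# illustrative only: poll the healthz endpoint seen in the trace until it returns 200
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.76.2:8443/healthz)" = "200" ]; do
	  sleep 0.5
	done
	echo "apiserver reports healthy"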
	
	
	==> CRI-O <==
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.683753746Z" level=info msg="Started container" PID=2280 containerID=de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f description=kube-system/coredns-66bc5c9577-dzxhh/coredns id=0398bf7c-121a-4c32-a01c-2f25838761f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4852ecf0d91effe6c42e3c240b0963ba0026319e994acfc861f878a43636ecab
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.69692819Z" level=info msg="Created container dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52: kube-system/kube-apiserver-pause-377932/kube-apiserver" id=8cefdf61-5a18-47ea-8d96-f83450245163 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.734377137Z" level=info msg="Starting container: dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52" id=e3f28318-af9f-4ba7-ad00-1689e6a08032 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.741604078Z" level=info msg="Created container 90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260: kube-system/kube-controller-manager-pause-377932/kube-controller-manager" id=2355f528-61c5-4806-8a55-07c7ffcb2675 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.787035914Z" level=info msg="Starting container: 90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260" id=6d85e9e1-391c-4c27-8a9c-be67cd649947 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.793739207Z" level=info msg="Started container" PID=2307 containerID=dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52 description=kube-system/kube-apiserver-pause-377932/kube-apiserver id=e3f28318-af9f-4ba7-ad00-1689e6a08032 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed1ff32de33668b9a0aad827eea796c16aec16d37ef6aa8cb018a901253df67c
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.800211441Z" level=info msg="Started container" PID=2328 containerID=90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260 description=kube-system/kube-controller-manager-pause-377932/kube-controller-manager id=6d85e9e1-391c-4c27-8a9c-be67cd649947 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c58c3750328561ae211b495e26d51344cc65a1875511ece8b59a6584e4a5897d
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.851916821Z" level=info msg="Created container fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad: kube-system/etcd-pause-377932/etcd" id=50cf18a6-d2c7-4ed2-825f-57e0d3a5caf0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.864325645Z" level=info msg="Starting container: fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad" id=1c8b6db8-8610-4cc3-ba48-d1465be034fc name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.88748112Z" level=info msg="Started container" PID=2360 containerID=fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad description=kube-system/etcd-pause-377932/etcd id=1c8b6db8-8610-4cc3-ba48-d1465be034fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc0b6c9f1b458a547dbd7704d2012bf65e3d12879bbe00a870102d798b03b8a3
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.04702837Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073804324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073842971Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073868555Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081020632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081204314Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081301054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085423314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085613273Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085695415Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.089742081Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.089916893Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.090005706Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.094854382Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.095023459Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	90b80185bedc6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   c58c375032856       kube-controller-manager-pause-377932   kube-system
	fed61c9e9742c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   bc0b6c9f1b458       etcd-pause-377932                      kube-system
	dfefbc3422a69       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   ed1ff32de3366       kube-apiserver-pause-377932            kube-system
	d4f345bc31ce0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   452a310a007d7       kube-scheduler-pause-377932            kube-system
	de83eb3c95e82       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   4852ecf0d91ef       coredns-66bc5c9577-dzxhh               kube-system
	8a6d93399096e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   c90a0729a6638       kindnet-8fr6g                          kube-system
	78219c83d3851       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   5d307e95d0dfa       kube-proxy-5tg9h                       kube-system
	863c9eb570a8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   4852ecf0d91ef       coredns-66bc5c9577-dzxhh               kube-system
	22b7f675021b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   c90a0729a6638       kindnet-8fr6g                          kube-system
	212c65a4a208b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   5d307e95d0dfa       kube-proxy-5tg9h                       kube-system
	7a3703bc7dce8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   452a310a007d7       kube-scheduler-pause-377932            kube-system
	d990448e5be14       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   c58c375032856       kube-controller-manager-pause-377932   kube-system
	b95158c79f68b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bc0b6c9f1b458       etcd-pause-377932                      kube-system
	7566fea0ee855       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   ed1ff32de3366       kube-apiserver-pause-377932            kube-system
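The table above is crictl output from the node: the Running rows with ATTEMPT 1 appear to be the control-plane and addon containers recreated when the cluster was restarted during the test, while the Exited rows with ATTEMPT 0 are their original instances. A hedged, hand-run way to pull the same fields for a single component, assuming crictl and jq are installed on the node and that the JSON layout follows the CRI ListContainers response:

	# illustrative only: name, state and attempt for kube-apiserver containers
	sudo crictl ps -a --name kube-apiserver -o json \
	  | jq -r '.containers[] | [.metadata.name, .state, (.metadata.attempt|tostring)] | @tsv'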
	
	
	==> coredns [863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33892 - 58348 "HINFO IN 1428656686438777192.7857227502320497675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023250327s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43488 - 15670 "HINFO IN 1690376347639942273.4027896821449090237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039049933s
	
	
	==> describe nodes <==
	Name:               pause-377932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-377932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-377932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-377932
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:14:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:14:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-377932
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                edfa0727-0e2e-4553-9782-29a85f375619
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dzxhh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-377932                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-8fr6g                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-377932             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-377932    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-5tg9h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-377932             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 73s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-377932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-377932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-377932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-377932 event: Registered Node pause-377932 in Controller
	  Normal   NodeReady                33s   kubelet          Node pause-377932 status is now: NodeReady
	  Normal   RegisteredNode           12s   node-controller  Node pause-377932 event: Registered Node pause-377932 in Controller
	
	
	==> dmesg <==
	[Nov29 09:42] overlayfs: idmapped layers are currently not supported
	[Nov29 09:43] overlayfs: idmapped layers are currently not supported
	[Nov29 09:44] overlayfs: idmapped layers are currently not supported
	[  +2.899018] overlayfs: idmapped layers are currently not supported
	[ +47.632598] overlayfs: idmapped layers are currently not supported
	[Nov29 09:45] overlayfs: idmapped layers are currently not supported
	[Nov29 09:47] overlayfs: idmapped layers are currently not supported
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7] <==
	{"level":"warn","ts":"2025-11-29T10:13:17.202559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.218544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.236308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.269590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.287114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.299613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.395275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T10:14:11.497290Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T10:14:11.497336Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-377932","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-29T10:14:11.497433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T10:14:11.497490Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T10:14:11.632799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.632951Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-11-29T10:14:11.632875Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T10:14:11.633008Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T10:14:11.633017Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-29T10:14:11.632934Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T10:14:11.633031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T10:14:11.633037Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.633068Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-29T10:14:11.633066Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-29T10:14:11.636360Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-29T10:14:11.636451Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.636493Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T10:14:11.636501Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-377932","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad] <==
	{"level":"warn","ts":"2025-11-29T10:14:23.215328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.236008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.259043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.280206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.322162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.360884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.401999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.436261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.454406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.486222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.517090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.549893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.578651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.626146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.638400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.658311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.675894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.704691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.723900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.735331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.759977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.792208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.811079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.829719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.978442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:14:40 up  2:57,  0 user,  load average: 2.43, 2.41, 2.09
	Linux pause-377932 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893] <==
	I1129 10:13:26.836212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:13:26.837052       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:13:26.837175       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:13:26.837187       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:13:26.837201       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:13:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:13:27.046860       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:13:27.046934       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:13:27.046967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:13:27.047306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:13:57.040232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:13:57.047751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:13:57.047751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:13:57.047836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:13:58.547827       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:13:58.547895       1 metrics.go:72] Registering metrics
	I1129 10:13:58.547959       1 controller.go:711] "Syncing nftables rules"
	I1129 10:14:07.045813       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:07.045871       1 main.go:301] handling current node
	
	
	==> kindnet [8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7] <==
	I1129 10:14:19.742003       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:14:19.755707       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:14:19.755884       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:14:19.755898       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:14:19.755930       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:14:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1129 10:14:20.035035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 10:14:20.035707       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:14:20.035891       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:14:20.035937       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:14:20.036395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:14:20.036861       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:14:20.037046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:14:20.037531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:14:25.036866       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:14:25.036909       1 metrics.go:72] Registering metrics
	I1129 10:14:25.036972       1 controller.go:711] "Syncing nftables rules"
	I1129 10:14:30.046573       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:30.046681       1 main.go:301] handling current node
	I1129 10:14:40.036419       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:40.036487       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e] <==
	W1129 10:14:11.520502       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520551       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520599       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520643       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520694       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520744       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520792       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520843       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520891       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520938       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520988       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521037       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521083       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521357       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521423       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521476       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521542       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521596       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521669       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521748       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.522940       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.522990       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523163       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523313       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523489       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52] <==
	I1129 10:14:24.914714       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:14:24.914742       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:14:24.915334       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:14:24.915547       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 10:14:24.915586       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:14:24.920137       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:14:24.922197       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:14:24.922700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:14:24.931152       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:14:24.932725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:14:24.932946       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:14:24.933801       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 10:14:24.935787       1 aggregator.go:171] initial CRD sync complete...
	I1129 10:14:24.935910       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:14:24.935943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:14:24.935974       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:14:24.951762       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1129 10:14:24.963712       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:14:24.972792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:14:25.609136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:14:26.907120       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:14:28.361604       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:14:28.412708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:14:28.460862       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:14:28.563599       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260] <==
	I1129 10:14:28.181295       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:14:28.186603       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 10:14:28.187789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 10:14:28.187887       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:14:28.189135       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 10:14:28.190289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 10:14:28.199650       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:14:28.199674       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:14:28.199682       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:14:28.199757       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:14:28.199845       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:14:28.204192       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:14:28.204295       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:14:28.204314       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 10:14:28.204543       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:14:28.204980       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:14:28.205087       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:14:28.205260       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:14:28.205706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-377932"
	I1129 10:14:28.205806       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:14:28.205895       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:14:28.206042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 10:14:28.206751       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:14:28.210229       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:14:28.217536       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-controller-manager [d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e] <==
	I1129 10:13:25.335166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-377932" podCIDRs=["10.244.0.0/24"]
	I1129 10:13:25.341199       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:13:25.347489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:13:25.361397       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:13:25.361409       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:13:25.361537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:13:25.361548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:13:25.361649       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:13:25.361744       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-377932"
	I1129 10:13:25.361772       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:13:25.361852       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 10:13:25.361913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 10:13:25.363760       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 10:13:25.363810       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:13:25.363931       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:13:25.364371       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 10:13:25.364535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:13:25.364608       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:13:25.364834       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:13:25.372362       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:13:25.372522       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:13:25.372551       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 10:13:25.375657       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:13:25.378266       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 10:14:10.369981       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de] <==
	I1129 10:13:26.899791       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:13:27.025729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:13:27.128553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:13:27.128595       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:13:27.128660       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:13:27.229490       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:13:27.229553       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:13:27.234600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:13:27.234933       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:13:27.234955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:13:27.243287       1 config.go:200] "Starting service config controller"
	I1129 10:13:27.243315       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:13:27.243345       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:13:27.243349       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:13:27.243359       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:13:27.243363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:13:27.250736       1 config.go:309] "Starting node config controller"
	I1129 10:13:27.250754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:13:27.250762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:13:27.345961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:13:27.346102       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:13:27.346116       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82] <==
	I1129 10:14:22.429475       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:14:23.531101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:14:25.034373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:14:25.034456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:14:25.034553       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:14:25.103963       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:14:25.104463       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:14:25.113391       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:14:25.113764       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:14:25.113982       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:14:25.115254       1 config.go:200] "Starting service config controller"
	I1129 10:14:25.115331       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:14:25.122852       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:14:25.122933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:14:25.122994       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:14:25.123022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:14:25.125927       1 config.go:309] "Starting node config controller"
	I1129 10:14:25.126025       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:14:25.126040       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:14:25.218172       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:14:25.224007       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:14:25.224104       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a] <==
	E1129 10:13:19.230408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:13:19.230467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:13:19.230531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:13:19.230583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:13:19.230642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:13:19.230679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:13:19.230708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:13:19.230742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:13:19.230778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:13:19.230842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:13:19.230888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:13:19.230934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:13:19.230980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:13:19.231026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:13:19.231075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:13:19.231166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:13:19.231217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:13:19.231294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1129 10:13:20.504266       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:11.501451       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 10:14:11.501472       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 10:14:11.501495       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1129 10:14:11.501517       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:11.501725       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 10:14:11.501740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5] <==
	I1129 10:14:23.705518       1 serving.go:386] Generated self-signed cert in-memory
	W1129 10:14:24.778335       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:14:24.778451       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:14:24.778486       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:14:24.778538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:14:24.830308       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:14:24.830399       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:14:24.846858       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:14:24.847121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:24.847176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:24.847222       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:14:24.950319       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.405595    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.405840    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.406103    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: I1129 10:14:19.412761    1299 scope.go:117] "RemoveContainer" containerID="7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413364    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413570    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tg9h\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d" pod="kube-system/kube-proxy-5tg9h"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413780    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.414016    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.420264    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.420585    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1c453fe7929af19256abbd914af6971" pod="kube-system/etcd-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.430817    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431080    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431366    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1c453fe7929af19256abbd914af6971" pod="kube-system/etcd-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431580    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="15c1a738221249b75177a6b68255993d" pod="kube-system/kube-controller-manager-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: I1129 10:14:19.431744    1299 scope.go:117] "RemoveContainer" containerID="d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431984    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.432481    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tg9h\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d" pod="kube-system/kube-proxy-5tg9h"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.432755    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.880194    1299 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-377932\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.880943    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-377932\" is forbidden: User \"system:node:pause-377932\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.882335    1299 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-377932\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 29 10:14:31 pause-377932 kubelet[1299]: W1129 10:14:31.372665    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 29 10:14:37 pause-377932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:14:37 pause-377932 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:14:37 pause-377932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-377932 -n pause-377932
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-377932 -n pause-377932: exit status 2 (435.957165ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-377932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-377932
helpers_test.go:243: (dbg) docker inspect pause-377932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926",
	        "Created": "2025-11-29T10:12:56.64444144Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:12:56.705483694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/hostname",
	        "HostsPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/hosts",
	        "LogPath": "/var/lib/docker/containers/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926/c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926-json.log",
	        "Name": "/pause-377932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-377932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-377932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c57b58a3396e0f59dd4e5e353610daa960e51c7404ef35e014f93cb9519d1926",
	                "LowerDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d660a8b90db9db67e15a19355514f2511645fe67af9ca391176d9cb9ddb9a7b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-377932",
	                "Source": "/var/lib/docker/volumes/pause-377932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-377932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-377932",
	                "name.minikube.sigs.k8s.io": "pause-377932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da248243934a9d7b24553e5e68da3f5ecf02a9d578c3598d5802d7662405ed8e",
	            "SandboxKey": "/var/run/docker/netns/da248243934a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-377932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:11:02:cb:7c:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37a5cdec6122efe72d1ddafdb31a4be1ec5e23b8fef2a3904e2b7ffc60825d9f",
	                    "EndpointID": "2e31f421f8f281334209c17723f2032e256281d84833b0eebafa13d0ecfa44d7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-377932",
	                        "c57b58a3396e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-377932 -n pause-377932
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-377932 -n pause-377932: exit status 2 (374.996519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-377932 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-377932 logs -n 25: (1.356112265s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:04 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p missing-upgrade-246693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-246693    │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:05 UTC │
	│ delete  │ -p missing-upgrade-246693                                                                                                                │ missing-upgrade-246693    │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:05 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:05 UTC │ 29 Nov 25 10:06 UTC │
	│ stop    │ -p kubernetes-upgrade-510809                                                                                                             │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:11 UTC │
	│ delete  │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ start   │ -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │ 29 Nov 25 10:06 UTC │
	│ ssh     │ -p NoKubernetes-399835 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:06 UTC │                     │
	│ stop    │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ start   │ -p NoKubernetes-399835 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ ssh     │ -p NoKubernetes-399835 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │                     │
	│ delete  │ -p NoKubernetes-399835                                                                                                                   │ NoKubernetes-399835       │ jenkins │ v1.37.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:07 UTC │
	│ start   │ -p stopped-upgrade-467241 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-467241    │ jenkins │ v1.35.0 │ 29 Nov 25 10:07 UTC │ 29 Nov 25 10:08 UTC │
	│ stop    │ stopped-upgrade-467241 stop                                                                                                              │ stopped-upgrade-467241    │ jenkins │ v1.35.0 │ 29 Nov 25 10:08 UTC │ 29 Nov 25 10:08 UTC │
	│ start   │ -p stopped-upgrade-467241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-467241    │ jenkins │ v1.37.0 │ 29 Nov 25 10:08 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │                     │
	│ start   │ -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:11 UTC │
	│ delete  │ -p kubernetes-upgrade-510809                                                                                                             │ kubernetes-upgrade-510809 │ jenkins │ v1.37.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:11 UTC │
	│ start   │ -p running-upgrade-493711 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-493711    │ jenkins │ v1.35.0 │ 29 Nov 25 10:11 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p running-upgrade-493711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-493711    │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │                     │
	│ delete  │ -p stopped-upgrade-467241                                                                                                                │ stopped-upgrade-467241    │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │ 29 Nov 25 10:12 UTC │
	│ start   │ -p pause-377932 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:12 UTC │ 29 Nov 25 10:14 UTC │
	│ start   │ -p pause-377932 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:14 UTC │ 29 Nov 25 10:14 UTC │
	│ pause   │ -p pause-377932 --alsologtostderr -v=5                                                                                                   │ pause-377932              │ jenkins │ v1.37.0 │ 29 Nov 25 10:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:14:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:14:10.207255  471543 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:14:10.207433  471543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:10.207463  471543 out.go:374] Setting ErrFile to fd 2...
	I1129 10:14:10.207484  471543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:14:10.207748  471543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:14:10.208124  471543 out.go:368] Setting JSON to false
	I1129 10:14:10.209140  471543 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10600,"bootTime":1764400651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:14:10.209243  471543 start.go:143] virtualization:  
	I1129 10:14:10.215134  471543 out.go:179] * [pause-377932] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:14:10.218360  471543 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:14:10.218450  471543 notify.go:221] Checking for updates...
	I1129 10:14:10.224508  471543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:14:10.227551  471543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:14:10.230533  471543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:14:10.233450  471543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:14:10.236263  471543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:14:10.239761  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:10.240650  471543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:14:10.263899  471543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:14:10.264039  471543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:14:10.331416  471543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:14:10.319867528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:14:10.331530  471543 docker.go:319] overlay module found
	I1129 10:14:10.334725  471543 out.go:179] * Using the docker driver based on existing profile
	I1129 10:14:10.337521  471543 start.go:309] selected driver: docker
	I1129 10:14:10.337537  471543 start.go:927] validating driver "docker" against &{Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:10.337683  471543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:14:10.337793  471543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:14:10.400110  471543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:14:10.39071891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:14:10.400511  471543 cni.go:84] Creating CNI manager for ""
	I1129 10:14:10.400579  471543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:14:10.400631  471543 start.go:353] cluster config:
	{Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:10.405538  471543 out.go:179] * Starting "pause-377932" primary control-plane node in "pause-377932" cluster
	I1129 10:14:10.408543  471543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:14:10.411584  471543 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:14:10.414670  471543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:14:10.414967  471543 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:14:10.415010  471543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:14:10.415033  471543 cache.go:65] Caching tarball of preloaded images
	I1129 10:14:10.415097  471543 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:14:10.415111  471543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:14:10.415253  471543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/config.json ...
	I1129 10:14:10.436486  471543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:14:10.436509  471543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:14:10.436525  471543 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:14:10.436559  471543 start.go:360] acquireMachinesLock for pause-377932: {Name:mkfed25658d78d0770cd24f56da636a13fb6ca68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:14:10.436636  471543 start.go:364] duration metric: took 45.596µs to acquireMachinesLock for "pause-377932"
	I1129 10:14:10.436664  471543 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:14:10.436673  471543 fix.go:54] fixHost starting: 
	I1129 10:14:10.436962  471543 cli_runner.go:164] Run: docker container inspect pause-377932 --format={{.State.Status}}
	I1129 10:14:10.454556  471543 fix.go:112] recreateIfNeeded on pause-377932: state=Running err=<nil>
	W1129 10:14:10.454597  471543 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:14:08.172998  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:37352->192.168.85.2:8443: read: connection reset by peer
	I1129 10:14:08.173053  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:08.173114  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:08.225227  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:08.225300  464519 cri.go:89] found id: "4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	I1129 10:14:08.225312  464519 cri.go:89] found id: ""
	I1129 10:14:08.225320  464519 logs.go:282] 2 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4]
	I1129 10:14:08.225375  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.229340  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.233062  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:08.233162  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:08.273389  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:08.273411  464519 cri.go:89] found id: ""
	I1129 10:14:08.273419  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:08.273474  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.276989  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:08.277061  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:08.313801  464519 cri.go:89] found id: ""
	I1129 10:14:08.313827  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.313836  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:08.313843  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:08.313903  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:08.420617  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:08.420639  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:08.420644  464519 cri.go:89] found id: ""
	I1129 10:14:08.420652  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:08.420706  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.428464  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.434281  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:08.434398  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:08.487054  464519 cri.go:89] found id: ""
	I1129 10:14:08.487079  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.487089  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:08.487096  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:08.487155  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:08.542050  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:08.542099  464519 cri.go:89] found id: ""
	I1129 10:14:08.542109  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:08.542165  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:08.545804  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:08.545872  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:08.591293  464519 cri.go:89] found id: ""
	I1129 10:14:08.591333  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.591349  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:08.591356  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:08.591412  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:08.636753  464519 cri.go:89] found id: ""
	I1129 10:14:08.636777  464519 logs.go:282] 0 containers: []
	W1129 10:14:08.636785  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:08.636794  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:08.636806  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:08.657322  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:08.657392  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:08.733280  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:08.733302  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:08.733314  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:08.784240  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:08.784271  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:08.827206  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:08.827237  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:08.897456  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:08.897493  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:09.026186  464519 logs.go:123] Gathering logs for kube-apiserver [4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4] ...
	I1129 10:14:09.026227  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	W1129 10:14:09.067642  464519 logs.go:130] failed kube-apiserver [4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4": Process exited with status 1
	stdout:
	
	stderr:
	E1129 10:14:09.064368    4179 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist" containerID="4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	time="2025-11-29T10:14:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1129 10:14:09.064368    4179 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist" containerID="4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4"
	time="2025-11-29T10:14:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4\": container with ID starting with 4ec01ca6e01e3f8182080077a554bff272a4b70cb0768f4c9629baa6b2eae9d4 not found: ID does not exist"
	
	** /stderr **
	I1129 10:14:09.067664  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:09.067677  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:09.148112  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:09.148151  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:09.194125  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:09.194152  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:09.232493  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:09.232521  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
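The block above is the standard evidence-collection pass taken when the apiserver is unreachable: journalctl for the CRI-O and kubelet units, `crictl logs --tail 400` per control-plane container, and a container listing as the final step. The earlier NotFound failure for the kube-apiserver container only means that container had already been removed between listing and log collection. A rough manual equivalent, run inside the node (for example over `minikube ssh` into the affected profile); `<container-id>` stands in for an ID taken from the `crictl ps -a` output:

    # Unit logs for the runtime and the kubelet
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # Find control-plane containers, then pull their logs
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    # Overall container status, with a docker fallback if crictl is missing
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a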
	I1129 10:14:10.457822  471543 out.go:252] * Updating the running docker "pause-377932" container ...
	I1129 10:14:10.457871  471543 machine.go:94] provisionDockerMachine start ...
	I1129 10:14:10.457960  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.475915  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.476250  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.476267  471543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:14:10.629781  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377932
	
	I1129 10:14:10.629823  471543 ubuntu.go:182] provisioning hostname "pause-377932"
	I1129 10:14:10.629934  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.647922  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.648240  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.648257  471543 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-377932 && echo "pause-377932" | sudo tee /etc/hostname
	I1129 10:14:10.815356  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-377932
	
	I1129 10:14:10.815436  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:10.834858  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:10.835184  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:10.835224  471543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-377932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-377932/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-377932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:14:10.990408  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
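The hostname script above is idempotent: it only touches the `127.0.1.1` entry when no line in `/etc/hosts` already ends in the machine name. A quick way to confirm the result on the node afterwards:

    grep -n 'pause-377932' /etc/hosts   # confirms some entry resolves the machine name locally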
	I1129 10:14:10.990449  471543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:14:10.990484  471543 ubuntu.go:190] setting up certificates
	I1129 10:14:10.990498  471543 provision.go:84] configureAuth start
	I1129 10:14:10.990559  471543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-377932
	I1129 10:14:11.020546  471543 provision.go:143] copyHostCerts
	I1129 10:14:11.020629  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:14:11.020648  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:14:11.020725  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:14:11.020838  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:14:11.020850  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:14:11.020878  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:14:11.020944  471543 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:14:11.020958  471543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:14:11.020985  471543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:14:11.021044  471543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.pause-377932 san=[127.0.0.1 192.168.76.2 localhost minikube pause-377932]
	I1129 10:14:11.119304  471543 provision.go:177] copyRemoteCerts
	I1129 10:14:11.119384  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:14:11.119435  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:11.138842  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:11.245778  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:14:11.263725  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 10:14:11.281976  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:14:11.299696  471543 provision.go:87] duration metric: took 309.172661ms to configureAuth
	I1129 10:14:11.299764  471543 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:14:11.300006  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:11.300148  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:11.319068  471543 main.go:143] libmachine: Using SSH client type: native
	I1129 10:14:11.319374  471543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1129 10:14:11.319393  471543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:14:11.779615  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:11.780087  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:11.780138  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:11.780196  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:11.818679  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:11.818702  464519 cri.go:89] found id: ""
	I1129 10:14:11.818711  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:11.818767  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.822258  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:11.822337  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:11.861843  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:11.861866  464519 cri.go:89] found id: ""
	I1129 10:14:11.861874  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:11.861935  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.865884  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:11.865963  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:11.903181  464519 cri.go:89] found id: ""
	I1129 10:14:11.903204  464519 logs.go:282] 0 containers: []
	W1129 10:14:11.903212  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:11.903219  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:11.903278  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:11.941785  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:11.941808  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:11.941813  464519 cri.go:89] found id: ""
	I1129 10:14:11.941820  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:11.941883  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.945523  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:11.948924  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:11.948994  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:11.985791  464519 cri.go:89] found id: ""
	I1129 10:14:11.985822  464519 logs.go:282] 0 containers: []
	W1129 10:14:11.985830  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:11.985838  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:11.985895  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:12.023905  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:12.023936  464519 cri.go:89] found id: ""
	I1129 10:14:12.023946  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:12.024012  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:12.027847  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:12.027955  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:12.064764  464519 cri.go:89] found id: ""
	I1129 10:14:12.064789  464519 logs.go:282] 0 containers: []
	W1129 10:14:12.064797  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:12.064804  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:12.064863  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:12.106266  464519 cri.go:89] found id: ""
	I1129 10:14:12.106303  464519 logs.go:282] 0 containers: []
	W1129 10:14:12.106312  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:12.106327  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:12.106338  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:12.226097  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:12.226139  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:12.271856  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:12.271889  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:12.317079  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:12.317125  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:12.356616  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:12.356654  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:12.404789  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:12.404815  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:12.468578  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:12.468618  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:12.486340  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:12.486368  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:12.557406  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:12.557425  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:12.557437  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:12.661847  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:12.661887  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
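Each retry cycle in this log starts with the healthz probe shown at the top of the block; `connection refused` only means nothing is listening on 8443 yet, not that the probe is misconfigured. A hand-rolled equivalent of that probe, assuming `curl` is available wherever it is run from:

    # Poll the apiserver health endpoint until it answers; -k skips TLS verification,
    # -f makes curl fail on non-2xx responses so the loop keeps waiting.
    until curl -skf https://192.168.85.2:8443/healthz >/dev/null; do
      echo "apiserver not ready yet"; sleep 2
    done
    echo "apiserver healthy"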
	I1129 10:14:15.211328  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:15.211782  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:15.211833  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:15.211905  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:15.248588  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:15.248614  464519 cri.go:89] found id: ""
	I1129 10:14:15.248624  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:15.248695  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.252308  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:15.252383  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:15.291407  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:15.291428  464519 cri.go:89] found id: ""
	I1129 10:14:15.291436  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:15.291497  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.295159  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:15.295229  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:15.331159  464519 cri.go:89] found id: ""
	I1129 10:14:15.331185  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.331193  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:15.331202  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:15.331261  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:15.371539  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:15.371562  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:15.371567  464519 cri.go:89] found id: ""
	I1129 10:14:15.371585  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:15.371642  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.375287  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.380027  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:15.380105  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:15.418069  464519 cri.go:89] found id: ""
	I1129 10:14:15.418113  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.418121  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:15.418127  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:15.418186  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:15.456411  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:15.456438  464519 cri.go:89] found id: ""
	I1129 10:14:15.456447  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:15.456504  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:15.460082  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:15.460176  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:15.501782  464519 cri.go:89] found id: ""
	I1129 10:14:15.501855  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.501870  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:15.501878  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:15.501937  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:15.539510  464519 cri.go:89] found id: ""
	I1129 10:14:15.539546  464519 logs.go:282] 0 containers: []
	W1129 10:14:15.539554  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:15.539568  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:15.539580  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:16.705832  471543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:14:16.705852  471543 machine.go:97] duration metric: took 6.247971843s to provisionDockerMachine
	I1129 10:14:16.705862  471543 start.go:293] postStartSetup for "pause-377932" (driver="docker")
	I1129 10:14:16.705873  471543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:14:16.705957  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:14:16.706008  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:16.724977  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:16.830227  471543 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:14:16.833715  471543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:14:16.833744  471543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:14:16.833755  471543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:14:16.833841  471543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:14:16.833968  471543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:14:16.834102  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:14:16.842157  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
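The scp above is the file-sync step: anything placed under the profile's `.minikube/files/` tree is mirrored onto the node at the same relative path, which is how the extra CA bundle ends up in `/etc/ssl/certs`. A quick check of both sides, using the paths as they appear in this run:

    ls /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem
    minikube -p pause-377932 ssh -- ls -l /etc/ssl/certs/3021822.pem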
	I1129 10:14:16.861131  471543 start.go:296] duration metric: took 155.252059ms for postStartSetup
	I1129 10:14:16.861314  471543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:14:16.861374  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:16.880256  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:16.983412  471543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:14:16.988619  471543 fix.go:56] duration metric: took 6.551939965s for fixHost
	I1129 10:14:16.988646  471543 start.go:83] releasing machines lock for "pause-377932", held for 6.551998213s
	I1129 10:14:16.988714  471543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-377932
	I1129 10:14:17.006355  471543 ssh_runner.go:195] Run: cat /version.json
	I1129 10:14:17.006448  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:17.006735  471543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:14:17.006810  471543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-377932
	I1129 10:14:17.027187  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:17.027238  471543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/pause-377932/id_rsa Username:docker}
	I1129 10:14:17.221187  471543 ssh_runner.go:195] Run: systemctl --version
	I1129 10:14:17.229160  471543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:14:17.272984  471543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:14:17.277349  471543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:14:17.277431  471543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:14:17.285453  471543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:14:17.285476  471543 start.go:496] detecting cgroup driver to use...
	I1129 10:14:17.285508  471543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:14:17.285558  471543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:14:17.300950  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:14:17.314379  471543 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:14:17.314566  471543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:14:17.330673  471543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:14:17.344861  471543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:14:17.494696  471543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:14:17.633763  471543 docker.go:234] disabling docker service ...
	I1129 10:14:17.633838  471543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:14:17.649049  471543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:14:17.662291  471543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:14:17.800222  471543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:14:17.947915  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:14:17.961235  471543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:14:17.977117  471543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:14:17.977230  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:17.986701  471543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:14:17.986822  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:17.996512  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.008104  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.019473  471543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:14:18.028475  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.037900  471543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.046759  471543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:14:18.055797  471543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:14:18.063911  471543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:14:18.071633  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:18.200600  471543 ssh_runner.go:195] Run: sudo systemctl restart crio
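The sed sequence above edits `/etc/crio/crio.conf.d/02-crio.conf` in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces `conmon_cgroup = "pod"`, and re-adds `net.ipv4.ip_unprivileged_port_start=0` to `default_sysctls` before the daemon-reload and restart. A spot check of the resulting drop-in, with the expected values shown as comments (the file carries other settings too):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",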
	I1129 10:14:18.421068  471543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:14:18.421217  471543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:14:18.425281  471543 start.go:564] Will wait 60s for crictl version
	I1129 10:14:18.425379  471543 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.429159  471543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:14:18.459280  471543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:14:18.459435  471543 ssh_runner.go:195] Run: crio --version
	I1129 10:14:18.490122  471543 ssh_runner.go:195] Run: crio --version
	I1129 10:14:18.522496  471543 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:14:18.525460  471543 cli_runner.go:164] Run: docker network inspect pause-377932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:14:18.543075  471543 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:14:18.547146  471543 kubeadm.go:884] updating cluster {Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:14:18.547300  471543 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:14:18.547361  471543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:14:18.585847  471543 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:14:18.585874  471543 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:14:18.585951  471543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:14:18.617356  471543 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:14:18.617377  471543 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:14:18.617385  471543 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:14:18.617486  471543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-377932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:14:18.617559  471543 ssh_runner.go:195] Run: crio config
	I1129 10:14:18.705913  471543 cni.go:84] Creating CNI manager for ""
	I1129 10:14:18.705986  471543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:14:18.706018  471543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:14:18.706108  471543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-377932 NodeName:pause-377932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:14:18.706277  471543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-377932"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:14:18.706390  471543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:14:18.714440  471543 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:14:18.714558  471543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:14:18.722204  471543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1129 10:14:18.735893  471543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:14:18.750047  471543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
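The kubeadm config rendered above is not applied directly; it is staged as `kubeadm.yaml.new` and, further down in this log, diffed against the active `/var/tmp/minikube/kubeadm.yaml` to decide whether the control plane needs reconfiguring. The same check by hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration needed"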
	I1129 10:14:18.765728  471543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:14:18.770164  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:18.955212  471543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:14:18.971931  471543 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932 for IP: 192.168.76.2
	I1129 10:14:18.971951  471543 certs.go:195] generating shared ca certs ...
	I1129 10:14:18.971968  471543 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:18.972115  471543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:14:18.972157  471543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:14:18.972164  471543 certs.go:257] generating profile certs ...
	I1129 10:14:18.972246  471543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key
	I1129 10:14:18.972314  471543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.key.83655726
	I1129 10:14:18.972356  471543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.key
	I1129 10:14:18.972461  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:14:18.972490  471543 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:14:18.972498  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:14:18.972527  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:14:18.972554  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:14:18.972579  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:14:18.972620  471543 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:14:18.973191  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:14:18.998957  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:14:19.026131  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:14:19.053720  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:14:19.077937  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 10:14:19.100973  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:14:19.125515  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:14:19.144852  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:14:19.163290  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:14:19.181519  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:14:19.202013  471543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:14:19.221657  471543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:14:19.237854  471543 ssh_runner.go:195] Run: openssl version
	I1129 10:14:19.244916  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:14:19.254448  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.262275  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.262398  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:14:19.315127  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:14:19.324658  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:14:19.334386  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.352484  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.352606  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:14:19.498447  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:14:19.532733  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:14:19.546587  471543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.555020  471543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.555093  471543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:14:19.696863  471543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:14:19.725007  471543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:14:19.742602  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:14:19.929703  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:14:20.017091  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:14:20.095234  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:14:20.158829  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:14:20.215645  471543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
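The run of `openssl x509 -checkend 86400` calls above verifies that none of the control-plane certificates expire within the next 24 hours (the command exits non-zero when one does). The same checks, folded into a loop over the files inspected here:

    for c in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
             etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c" -checkend 86400 \
        && echo "$c: valid for at least 24h" || echo "$c: expiring or unreadable"
    done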
	I1129 10:14:20.264149  471543 kubeadm.go:401] StartCluster: {Name:pause-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:14:20.264281  471543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:14:20.264342  471543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:14:20.303221  471543 cri.go:89] found id: "90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260"
	I1129 10:14:20.303242  471543 cri.go:89] found id: "fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad"
	I1129 10:14:20.303246  471543 cri.go:89] found id: "dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52"
	I1129 10:14:20.303250  471543 cri.go:89] found id: "d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5"
	I1129 10:14:20.303253  471543 cri.go:89] found id: "de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f"
	I1129 10:14:20.303257  471543 cri.go:89] found id: "8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7"
	I1129 10:14:20.303260  471543 cri.go:89] found id: "78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82"
	I1129 10:14:20.303263  471543 cri.go:89] found id: "863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2"
	I1129 10:14:20.303266  471543 cri.go:89] found id: "22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893"
	I1129 10:14:20.303272  471543 cri.go:89] found id: "212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de"
	I1129 10:14:20.303276  471543 cri.go:89] found id: "7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a"
	I1129 10:14:20.303279  471543 cri.go:89] found id: "d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	I1129 10:14:20.303281  471543 cri.go:89] found id: "b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7"
	I1129 10:14:20.303284  471543 cri.go:89] found id: "7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	I1129 10:14:20.303287  471543 cri.go:89] found id: ""
	I1129 10:14:20.303336  471543 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:14:20.334479  471543 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:14:20Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:14:20.334624  471543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:14:20.346583  471543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:14:20.346658  471543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:14:20.346731  471543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:14:20.359392  471543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:14:20.360130  471543 kubeconfig.go:125] found "pause-377932" server: "https://192.168.76.2:8443"
	I1129 10:14:20.361086  471543 kapi.go:59] client config for pause-377932: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key", CAFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 10:14:20.361839  471543 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1129 10:14:20.361920  471543 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1129 10:14:20.361941  471543 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1129 10:14:20.361971  471543 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1129 10:14:20.361992  471543 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1129 10:14:20.362448  471543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:14:20.371795  471543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:14:20.371878  471543 kubeadm.go:602] duration metric: took 25.200264ms to restartPrimaryControlPlane
	I1129 10:14:20.371904  471543 kubeadm.go:403] duration metric: took 107.763703ms to StartCluster
	I1129 10:14:20.371944  471543 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:20.372044  471543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:14:20.373052  471543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:14:20.373354  471543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:14:20.373806  471543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:14:20.373977  471543 config.go:182] Loaded profile config "pause-377932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:14:20.377407  471543 out.go:179] * Verifying Kubernetes components...
	I1129 10:14:20.377513  471543 out.go:179] * Enabled addons: 
	I1129 10:14:15.653084  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:15.653125  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:15.722376  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:15.722393  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:15.722406  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:15.766848  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:15.766878  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:15.846086  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:15.846126  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:15.887481  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:15.887510  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:15.939346  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:15.939375  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:15.957751  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:15.957783  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:16.002665  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:16.002756  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:16.045224  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:16.045300  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:18.607469  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:18.607856  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:18.607909  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:18.607968  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:18.658452  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:18.658479  464519 cri.go:89] found id: ""
	I1129 10:14:18.658495  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:18.658563  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.663167  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:18.663256  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:18.736250  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:18.736282  464519 cri.go:89] found id: ""
	I1129 10:14:18.736291  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:18.736363  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.741561  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:18.741631  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:18.800886  464519 cri.go:89] found id: ""
	I1129 10:14:18.800919  464519 logs.go:282] 0 containers: []
	W1129 10:14:18.800928  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:18.800935  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:18.801004  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:18.871691  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:18.871715  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:18.871720  464519 cri.go:89] found id: ""
	I1129 10:14:18.871728  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:18.871788  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.875696  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.879792  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:18.879863  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:18.924788  464519 cri.go:89] found id: ""
	I1129 10:14:18.924809  464519 logs.go:282] 0 containers: []
	W1129 10:14:18.924817  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:18.924823  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:18.924878  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:18.978007  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:18.978025  464519 cri.go:89] found id: ""
	I1129 10:14:18.978033  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:18.978109  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:18.983041  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:18.983114  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:19.035307  464519 cri.go:89] found id: ""
	I1129 10:14:19.035329  464519 logs.go:282] 0 containers: []
	W1129 10:14:19.035338  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:19.035344  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:19.035404  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:19.084528  464519 cri.go:89] found id: ""
	I1129 10:14:19.084605  464519 logs.go:282] 0 containers: []
	W1129 10:14:19.084631  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:19.084649  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:19.084664  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:19.243566  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:19.243635  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:19.294884  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:19.294963  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:19.367411  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:19.367482  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:19.442814  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:19.442899  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:19.524659  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:19.524691  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:19.626340  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:19.626409  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:19.655258  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:19.655344  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:19.783414  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:19.783476  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:19.783504  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:19.869879  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:19.869953  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:20.381445  471543 addons.go:530] duration metric: took 7.627627ms for enable addons: enabled=[]
	I1129 10:14:20.381603  471543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:14:20.652550  471543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:14:20.670850  471543 node_ready.go:35] waiting up to 6m0s for node "pause-377932" to be "Ready" ...
	I1129 10:14:24.757175  471543 node_ready.go:49] node "pause-377932" is "Ready"
	I1129 10:14:24.757257  471543 node_ready.go:38] duration metric: took 4.086323201s for node "pause-377932" to be "Ready" ...
	I1129 10:14:24.757285  471543 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:14:24.757372  471543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:14:24.775980  471543 api_server.go:72] duration metric: took 4.402566963s to wait for apiserver process to appear ...
	I1129 10:14:24.776067  471543 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:14:24.776102  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:24.817468  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 10:14:24.817585  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 10:14:22.516247  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:22.516621  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:22.516669  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:22.516729  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:22.588017  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:22.588043  464519 cri.go:89] found id: ""
	I1129 10:14:22.588052  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:22.588110  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.592540  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:22.592616  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:22.666679  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:22.666707  464519 cri.go:89] found id: ""
	I1129 10:14:22.666716  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:22.666774  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.678004  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:22.678203  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:22.756247  464519 cri.go:89] found id: ""
	I1129 10:14:22.756277  464519 logs.go:282] 0 containers: []
	W1129 10:14:22.756286  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:22.756293  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:22.756354  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:22.840108  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:22.840132  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:22.840137  464519 cri.go:89] found id: ""
	I1129 10:14:22.840145  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:22.840198  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.844090  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.847688  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:22.847765  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:22.912524  464519 cri.go:89] found id: ""
	I1129 10:14:22.912553  464519 logs.go:282] 0 containers: []
	W1129 10:14:22.912562  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:22.912569  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:22.912680  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:22.970963  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:22.970989  464519 cri.go:89] found id: ""
	I1129 10:14:22.970998  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:22.971054  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:22.975500  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:22.975620  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:23.022858  464519 cri.go:89] found id: ""
	I1129 10:14:23.022887  464519 logs.go:282] 0 containers: []
	W1129 10:14:23.022895  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:23.022902  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:23.022962  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:23.075279  464519 cri.go:89] found id: ""
	I1129 10:14:23.075316  464519 logs.go:282] 0 containers: []
	W1129 10:14:23.075325  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:23.075358  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:23.075401  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:23.134297  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:23.134331  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:23.256293  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:23.256330  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:23.331802  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:23.331840  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:23.476444  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:23.476488  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:23.603443  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:23.603467  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:23.603480  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:23.682825  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:23.682857  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:23.748553  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:23.748596  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:23.817833  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:23.817866  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:23.906450  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:23.906490  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:25.276173  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:25.288138  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:14:25.288234  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:14:25.776525  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:25.784997  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:14:25.785074  471543 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:14:26.276787  471543 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:14:26.284965  471543 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:14:26.286021  471543 api_server.go:141] control plane version: v1.34.1
	I1129 10:14:26.286045  471543 api_server.go:131] duration metric: took 1.509957018s to wait for apiserver health ...
	I1129 10:14:26.286055  471543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:14:26.289574  471543 system_pods.go:59] 7 kube-system pods found
	I1129 10:14:26.289620  471543 system_pods.go:61] "coredns-66bc5c9577-dzxhh" [7f51682b-549d-403a-8927-01e86fc63f8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:14:26.289629  471543 system_pods.go:61] "etcd-pause-377932" [0fcb1c6d-c6d2-48a9-b7e9-e4399f9995ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:14:26.289635  471543 system_pods.go:61] "kindnet-8fr6g" [a1abc657-1bd4-43cc-860c-d23afb2e0cac] Running
	I1129 10:14:26.289641  471543 system_pods.go:61] "kube-apiserver-pause-377932" [4ae1d404-657d-4d83-a47d-1155bab5da50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:14:26.289647  471543 system_pods.go:61] "kube-controller-manager-pause-377932" [b0b1b8f9-4a25-44f5-a305-c25e419a2d50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:14:26.289658  471543 system_pods.go:61] "kube-proxy-5tg9h" [7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d] Running
	I1129 10:14:26.289664  471543 system_pods.go:61] "kube-scheduler-pause-377932" [4546f2ec-ec43-4e8d-8933-295179d3d385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:14:26.289673  471543 system_pods.go:74] duration metric: took 3.610567ms to wait for pod list to return data ...
	I1129 10:14:26.289684  471543 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:14:26.292221  471543 default_sa.go:45] found service account: "default"
	I1129 10:14:26.292246  471543 default_sa.go:55] duration metric: took 2.555827ms for default service account to be created ...
	I1129 10:14:26.292266  471543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:14:26.295033  471543 system_pods.go:86] 7 kube-system pods found
	I1129 10:14:26.295072  471543 system_pods.go:89] "coredns-66bc5c9577-dzxhh" [7f51682b-549d-403a-8927-01e86fc63f8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:14:26.295081  471543 system_pods.go:89] "etcd-pause-377932" [0fcb1c6d-c6d2-48a9-b7e9-e4399f9995ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:14:26.295091  471543 system_pods.go:89] "kindnet-8fr6g" [a1abc657-1bd4-43cc-860c-d23afb2e0cac] Running
	I1129 10:14:26.295104  471543 system_pods.go:89] "kube-apiserver-pause-377932" [4ae1d404-657d-4d83-a47d-1155bab5da50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:14:26.295121  471543 system_pods.go:89] "kube-controller-manager-pause-377932" [b0b1b8f9-4a25-44f5-a305-c25e419a2d50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:14:26.295130  471543 system_pods.go:89] "kube-proxy-5tg9h" [7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d] Running
	I1129 10:14:26.295148  471543 system_pods.go:89] "kube-scheduler-pause-377932" [4546f2ec-ec43-4e8d-8933-295179d3d385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:14:26.295159  471543 system_pods.go:126] duration metric: took 2.888089ms to wait for k8s-apps to be running ...
	I1129 10:14:26.295167  471543 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:14:26.295233  471543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:14:26.309354  471543 system_svc.go:56] duration metric: took 14.175325ms WaitForService to wait for kubelet
	I1129 10:14:26.309389  471543 kubeadm.go:587] duration metric: took 5.935981276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:14:26.309424  471543 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:14:26.312326  471543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:14:26.312353  471543 node_conditions.go:123] node cpu capacity is 2
	I1129 10:14:26.312366  471543 node_conditions.go:105] duration metric: took 2.937238ms to run NodePressure ...
	I1129 10:14:26.312379  471543 start.go:242] waiting for startup goroutines ...
	I1129 10:14:26.312387  471543 start.go:247] waiting for cluster config update ...
	I1129 10:14:26.312395  471543 start.go:256] writing updated cluster config ...
	I1129 10:14:26.312707  471543 ssh_runner.go:195] Run: rm -f paused
	I1129 10:14:26.316272  471543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:14:26.316917  471543 kapi.go:59] client config for pause-377932: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/profiles/pause-377932/client.key", CAFile:"/home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 10:14:26.320131  471543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzxhh" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:28.325273  471543 pod_ready.go:104] pod "coredns-66bc5c9577-dzxhh" is not "Ready", error: <nil>
	I1129 10:14:26.431327  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:26.431774  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:26.431865  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:26.431940  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:26.473480  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:26.473504  464519 cri.go:89] found id: ""
	I1129 10:14:26.473513  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:26.473569  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.478269  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:26.478341  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:26.520325  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:26.520346  464519 cri.go:89] found id: ""
	I1129 10:14:26.520354  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:26.520431  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.523902  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:26.524015  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:26.565113  464519 cri.go:89] found id: ""
	I1129 10:14:26.565145  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.565159  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:26.565166  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:26.565240  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:26.606214  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:26.606235  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:26.606240  464519 cri.go:89] found id: ""
	I1129 10:14:26.606247  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:26.606304  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.609935  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.613645  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:26.613721  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:26.650856  464519 cri.go:89] found id: ""
	I1129 10:14:26.650880  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.650888  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:26.650895  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:26.650959  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:26.692642  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:26.692665  464519 cri.go:89] found id: ""
	I1129 10:14:26.692674  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:26.692757  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:26.696294  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:26.696407  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:26.733135  464519 cri.go:89] found id: ""
	I1129 10:14:26.733160  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.733169  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:26.733175  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:26.733264  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:26.791510  464519 cri.go:89] found id: ""
	I1129 10:14:26.791535  464519 logs.go:282] 0 containers: []
	W1129 10:14:26.791544  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:26.791559  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:26.791571  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:26.894460  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:26.894496  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:26.945773  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:26.945803  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:26.991247  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:26.991273  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:27.060607  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:27.060646  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:27.107741  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:27.107768  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:27.227222  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:27.227266  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:27.246941  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:27.246967  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:27.336139  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:27.336162  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:27.336178  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:27.390648  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:27.390674  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:29.937339  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:29.937793  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:29.937857  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:29.937931  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:29.975956  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:29.975978  464519 cri.go:89] found id: ""
	I1129 10:14:29.975987  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:29.976043  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:29.979718  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:29.979795  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:30.069364  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:30.069385  464519 cri.go:89] found id: ""
	I1129 10:14:30.069394  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:30.069465  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.078531  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:30.078661  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:30.123654  464519 cri.go:89] found id: ""
	I1129 10:14:30.123679  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.123688  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:30.123695  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:30.123760  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:30.163132  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:30.163160  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:30.163165  464519 cri.go:89] found id: ""
	I1129 10:14:30.163173  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:30.163231  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.167270  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.170950  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:30.171052  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:30.209863  464519 cri.go:89] found id: ""
	I1129 10:14:30.209889  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.209898  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:30.209905  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:30.209965  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:30.249566  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:30.249588  464519 cri.go:89] found id: ""
	I1129 10:14:30.249597  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:30.249655  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:30.253459  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:30.253533  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:30.292349  464519 cri.go:89] found id: ""
	I1129 10:14:30.292386  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.292395  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:30.292418  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:30.292497  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:30.335757  464519 cri.go:89] found id: ""
	I1129 10:14:30.335787  464519 logs.go:282] 0 containers: []
	W1129 10:14:30.335796  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:30.335811  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:30.335824  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:30.389964  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:30.389997  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:30.430495  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:30.430524  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:30.494462  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:30.494500  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:30.540485  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:30.540516  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:30.326812  471543 pod_ready.go:104] pod "coredns-66bc5c9577-dzxhh" is not "Ready", error: <nil>
	I1129 10:14:31.827280  471543 pod_ready.go:94] pod "coredns-66bc5c9577-dzxhh" is "Ready"
	I1129 10:14:31.827311  471543 pod_ready.go:86] duration metric: took 5.507154696s for pod "coredns-66bc5c9577-dzxhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:31.833398  471543 pod_ready.go:83] waiting for pod "etcd-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:33.841078  471543 pod_ready.go:104] pod "etcd-pause-377932" is not "Ready", error: <nil>
	I1129 10:14:34.840294  471543 pod_ready.go:94] pod "etcd-pause-377932" is "Ready"
	I1129 10:14:34.840326  471543 pod_ready.go:86] duration metric: took 3.006891445s for pod "etcd-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:34.842885  471543 pod_ready.go:83] waiting for pod "kube-apiserver-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:14:30.616098  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:30.616117  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:30.616131  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:30.694878  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:30.694918  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:30.734373  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:30.734404  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:30.868463  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:30.868510  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:30.886980  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:30.887012  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.432443  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:33.432868  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:33.432915  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:33.432974  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:33.470250  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.470270  464519 cri.go:89] found id: ""
	I1129 10:14:33.470278  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:33.470337  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.473962  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:33.474036  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:33.515016  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:33.515039  464519 cri.go:89] found id: ""
	I1129 10:14:33.515047  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:33.515100  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.518732  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:33.518804  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:33.557911  464519 cri.go:89] found id: ""
	I1129 10:14:33.557939  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.557958  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:33.557965  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:33.558043  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:33.608145  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:33.608166  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:33.608171  464519 cri.go:89] found id: ""
	I1129 10:14:33.608178  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:33.608233  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.611828  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.615424  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:33.615503  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:33.655729  464519 cri.go:89] found id: ""
	I1129 10:14:33.655755  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.655764  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:33.655771  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:33.655832  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:33.695385  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:33.695456  464519 cri.go:89] found id: ""
	I1129 10:14:33.695479  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:33.695564  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:33.699100  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:33.699180  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:33.737669  464519 cri.go:89] found id: ""
	I1129 10:14:33.737732  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.737754  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:33.737773  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:33.737855  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:33.782943  464519 cri.go:89] found id: ""
	I1129 10:14:33.783020  464519 logs.go:282] 0 containers: []
	W1129 10:14:33.783044  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:33.783072  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:33.783113  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:33.863899  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:33.863933  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:33.863954  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:33.907348  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:33.907384  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:33.986674  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:33.986711  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:34.026252  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:34.026293  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:34.064257  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:34.064286  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:34.127502  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:34.127539  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:34.173016  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:34.173045  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:34.290514  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:34.290550  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:34.309122  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:34.309153  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
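The collection cycle above follows a fixed two-step pattern for every component: enumerate candidate container IDs with "sudo crictl ps -a --quiet --name=<component>", then tail each hit with "sudo /usr/bin/crictl logs --tail 400 <id>". Below is a minimal sketch of that same loop outside minikube, assuming crictl is installed on the node and sudo needs no password; the component list simply mirrors the one probed above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gather lists all containers (running or exited) whose name matches the
// component, then tails each one's logs - the same crictl invocations that
// appear in the log lines above (minikube resolves the crictl path first
// via "which crictl"; plain "crictl" is used here for brevity).
func gather(component string) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		fmt.Printf("listing %s containers failed: %v\n", component, err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("==> %s [%s] <==\n", component, id)
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Print(string(logs))
	}
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		gather(c)
	}
}

Components that return no IDs (coredns, kube-proxy, kindnet and storage-provisioner at this point in the run) are simply skipped; kubelet, dmesg, CRI-O and the overall container status are collected separately via journalctl, dmesg and a bare "crictl ps -a", as the subsequent lines show.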
	I1129 10:14:36.848671  471543 pod_ready.go:94] pod "kube-apiserver-pause-377932" is "Ready"
	I1129 10:14:36.848707  471543 pod_ready.go:86] duration metric: took 2.00579282s for pod "kube-apiserver-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.851563  471543 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.856696  471543 pod_ready.go:94] pod "kube-controller-manager-pause-377932" is "Ready"
	I1129 10:14:36.856723  471543 pod_ready.go:86] duration metric: took 5.129213ms for pod "kube-controller-manager-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.859188  471543 pod_ready.go:83] waiting for pod "kube-proxy-5tg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.864043  471543 pod_ready.go:94] pod "kube-proxy-5tg9h" is "Ready"
	I1129 10:14:36.864076  471543 pod_ready.go:86] duration metric: took 4.859129ms for pod "kube-proxy-5tg9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:36.866768  471543 pod_ready.go:83] waiting for pod "kube-scheduler-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:37.237857  471543 pod_ready.go:94] pod "kube-scheduler-pause-377932" is "Ready"
	I1129 10:14:37.237889  471543 pod_ready.go:86] duration metric: took 371.091872ms for pod "kube-scheduler-pause-377932" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:14:37.237902  471543 pod_ready.go:40] duration metric: took 10.921600408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:14:37.312513  471543 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:14:37.315675  471543 out.go:179] * Done! kubectl is now configured to use "pause-377932" cluster and "default" namespace by default
	I1129 10:14:36.868912  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:36.869347  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:36.869397  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:36.869454  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:36.909930  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:36.909954  464519 cri.go:89] found id: ""
	I1129 10:14:36.909962  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:36.910018  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:36.913803  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:36.913901  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 10:14:36.952209  464519 cri.go:89] found id: "e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:36.952228  464519 cri.go:89] found id: ""
	I1129 10:14:36.952237  464519 logs.go:282] 1 containers: [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181]
	I1129 10:14:36.952291  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:36.955894  464519 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 10:14:36.956014  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 10:14:36.994525  464519 cri.go:89] found id: ""
	I1129 10:14:36.994552  464519 logs.go:282] 0 containers: []
	W1129 10:14:36.994561  464519 logs.go:284] No container was found matching "coredns"
	I1129 10:14:36.994567  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 10:14:36.994623  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 10:14:37.037828  464519 cri.go:89] found id: "c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:37.037859  464519 cri.go:89] found id: "270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:37.037866  464519 cri.go:89] found id: ""
	I1129 10:14:37.037873  464519 logs.go:282] 2 containers: [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9]
	I1129 10:14:37.037932  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:37.041741  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:37.045078  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 10:14:37.045149  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 10:14:37.083589  464519 cri.go:89] found id: ""
	I1129 10:14:37.083615  464519 logs.go:282] 0 containers: []
	W1129 10:14:37.083625  464519 logs.go:284] No container was found matching "kube-proxy"
	I1129 10:14:37.083632  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 10:14:37.083691  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 10:14:37.127387  464519 cri.go:89] found id: "d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:37.127410  464519 cri.go:89] found id: ""
	I1129 10:14:37.127422  464519 logs.go:282] 1 containers: [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b]
	I1129 10:14:37.127479  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:37.131233  464519 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 10:14:37.131329  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 10:14:37.169726  464519 cri.go:89] found id: ""
	I1129 10:14:37.169751  464519 logs.go:282] 0 containers: []
	W1129 10:14:37.169768  464519 logs.go:284] No container was found matching "kindnet"
	I1129 10:14:37.169776  464519 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 10:14:37.169842  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 10:14:37.207321  464519 cri.go:89] found id: ""
	I1129 10:14:37.207346  464519 logs.go:282] 0 containers: []
	W1129 10:14:37.207354  464519 logs.go:284] No container was found matching "storage-provisioner"
	I1129 10:14:37.207370  464519 logs.go:123] Gathering logs for kubelet ...
	I1129 10:14:37.207383  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 10:14:37.335493  464519 logs.go:123] Gathering logs for etcd [e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181] ...
	I1129 10:14:37.335570  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4ed828f957b884ce4e1c4e931bc386cae6d1ba97b8e2afc4d6647bf39a9c181"
	I1129 10:14:37.413921  464519 logs.go:123] Gathering logs for kube-scheduler [c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440] ...
	I1129 10:14:37.413960  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3da5a008b22d9f0debf6faf54a8512b34cc40334c49e73eed52b2c1cff7e440"
	I1129 10:14:37.522507  464519 logs.go:123] Gathering logs for kube-controller-manager [d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b] ...
	I1129 10:14:37.522546  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d54d9e64d875a32262093aeebc2dc3c7e128544cefc74ac3ce18649a3d602b"
	I1129 10:14:37.576088  464519 logs.go:123] Gathering logs for CRI-O ...
	I1129 10:14:37.576120  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 10:14:37.653602  464519 logs.go:123] Gathering logs for dmesg ...
	I1129 10:14:37.653641  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 10:14:37.672619  464519 logs.go:123] Gathering logs for describe nodes ...
	I1129 10:14:37.672648  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 10:14:37.773300  464519 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 10:14:37.773322  464519 logs.go:123] Gathering logs for kube-apiserver [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375] ...
	I1129 10:14:37.773334  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:37.834553  464519 logs.go:123] Gathering logs for kube-scheduler [270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9] ...
	I1129 10:14:37.834586  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 270952ff6ad1f3de27ade99f00bf5b2d9cd85b549b6b8f1d8715610d3ce69fd9"
	I1129 10:14:37.889407  464519 logs.go:123] Gathering logs for container status ...
	I1129 10:14:37.889437  464519 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 10:14:40.438198  464519 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:14:40.438726  464519 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 10:14:40.438774  464519 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 10:14:40.438831  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 10:14:40.490905  464519 cri.go:89] found id: "1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375"
	I1129 10:14:40.490924  464519 cri.go:89] found id: ""
	I1129 10:14:40.490940  464519 logs.go:282] 1 containers: [1ab0f968e27caeb411d3f09abb91eaca387ba716726097f3695b47bc802f6375]
	I1129 10:14:40.490995  464519 ssh_runner.go:195] Run: which crictl
	I1129 10:14:40.495352  464519 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 10:14:40.495419  464519 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
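Between collection rounds the tool re-probes the apiserver healthz endpoint (https://192.168.85.2:8443/healthz) and records "stopped: ... connection refused" while the server is unreachable. A bare-bones version of such a probe is sketched below, with certificate verification disabled purely for illustration; a real client would normally verify the apiserver's serving certificate instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Short timeout so a down apiserver fails fast, as in the retry loop above.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused" as in the log
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // "200 OK ok" once the apiserver is healthy
}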
	
	
	==> CRI-O <==
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.683753746Z" level=info msg="Started container" PID=2280 containerID=de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f description=kube-system/coredns-66bc5c9577-dzxhh/coredns id=0398bf7c-121a-4c32-a01c-2f25838761f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4852ecf0d91effe6c42e3c240b0963ba0026319e994acfc861f878a43636ecab
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.69692819Z" level=info msg="Created container dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52: kube-system/kube-apiserver-pause-377932/kube-apiserver" id=8cefdf61-5a18-47ea-8d96-f83450245163 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.734377137Z" level=info msg="Starting container: dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52" id=e3f28318-af9f-4ba7-ad00-1689e6a08032 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.741604078Z" level=info msg="Created container 90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260: kube-system/kube-controller-manager-pause-377932/kube-controller-manager" id=2355f528-61c5-4806-8a55-07c7ffcb2675 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.787035914Z" level=info msg="Starting container: 90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260" id=6d85e9e1-391c-4c27-8a9c-be67cd649947 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.793739207Z" level=info msg="Started container" PID=2307 containerID=dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52 description=kube-system/kube-apiserver-pause-377932/kube-apiserver id=e3f28318-af9f-4ba7-ad00-1689e6a08032 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed1ff32de33668b9a0aad827eea796c16aec16d37ef6aa8cb018a901253df67c
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.800211441Z" level=info msg="Started container" PID=2328 containerID=90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260 description=kube-system/kube-controller-manager-pause-377932/kube-controller-manager id=6d85e9e1-391c-4c27-8a9c-be67cd649947 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c58c3750328561ae211b495e26d51344cc65a1875511ece8b59a6584e4a5897d
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.851916821Z" level=info msg="Created container fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad: kube-system/etcd-pause-377932/etcd" id=50cf18a6-d2c7-4ed2-825f-57e0d3a5caf0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.864325645Z" level=info msg="Starting container: fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad" id=1c8b6db8-8610-4cc3-ba48-d1465be034fc name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:14:19 pause-377932 crio[2061]: time="2025-11-29T10:14:19.88748112Z" level=info msg="Started container" PID=2360 containerID=fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad description=kube-system/etcd-pause-377932/etcd id=1c8b6db8-8610-4cc3-ba48-d1465be034fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc0b6c9f1b458a547dbd7704d2012bf65e3d12879bbe00a870102d798b03b8a3
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.04702837Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073804324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073842971Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.073868555Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081020632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081204314Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.081301054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085423314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085613273Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.085695415Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.089742081Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.089916893Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.090005706Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.094854382Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:14:30 pause-377932 crio[2061]: time="2025-11-29T10:14:30.095023459Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	90b80185bedc6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   c58c375032856       kube-controller-manager-pause-377932   kube-system
	fed61c9e9742c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   bc0b6c9f1b458       etcd-pause-377932                      kube-system
	dfefbc3422a69       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   ed1ff32de3366       kube-apiserver-pause-377932            kube-system
	d4f345bc31ce0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            1                   452a310a007d7       kube-scheduler-pause-377932            kube-system
	de83eb3c95e82       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   4852ecf0d91ef       coredns-66bc5c9577-dzxhh               kube-system
	8a6d93399096e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   c90a0729a6638       kindnet-8fr6g                          kube-system
	78219c83d3851       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   5d307e95d0dfa       kube-proxy-5tg9h                       kube-system
	863c9eb570a8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   4852ecf0d91ef       coredns-66bc5c9577-dzxhh               kube-system
	22b7f675021b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   c90a0729a6638       kindnet-8fr6g                          kube-system
	212c65a4a208b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   5d307e95d0dfa       kube-proxy-5tg9h                       kube-system
	7a3703bc7dce8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   452a310a007d7       kube-scheduler-pause-377932            kube-system
	d990448e5be14       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   c58c375032856       kube-controller-manager-pause-377932   kube-system
	b95158c79f68b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bc0b6c9f1b458       etcd-pause-377932                      kube-system
	7566fea0ee855       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   ed1ff32de3366       kube-apiserver-pause-377932            kube-system
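Note that each component appears twice in this table: a Running container with ATTEMPT 1 and an Exited container with ATTEMPT 0, sharing the same POD ID. This matches the restart sequence logged above, where the control-plane and addon containers were recreated inside their original sandboxes during the second start of pause-377932 rather than being rescheduled into new pods.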
	
	
	==> coredns [863c9eb570a8eedbd9cc558c29553aa219ca81df54cf5bacbb12a0581f16a6e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33892 - 58348 "HINFO IN 1428656686438777192.7857227502320497675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023250327s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de83eb3c95e82703638d04ec6c96dba17535571dd2819c2d00dc5bb12b6f0a1f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43488 - 15670 "HINFO IN 1690376347639942273.4027896821449090237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039049933s
	
	
	==> describe nodes <==
	Name:               pause-377932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-377932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-377932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-377932
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:14:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:13:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:14:07 +0000   Sat, 29 Nov 2025 10:14:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-377932
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                edfa0727-0e2e-4553-9782-29a85f375619
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dzxhh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-377932                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-8fr6g                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-377932             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-377932    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-5tg9h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-377932             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 82s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s   kubelet          Node pause-377932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s   kubelet          Node pause-377932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s   kubelet          Node pause-377932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-377932 event: Registered Node pause-377932 in Controller
	  Normal   NodeReady                36s   kubelet          Node pause-377932 status is now: NodeReady
	  Normal   RegisteredNode           15s   node-controller  Node pause-377932 event: Registered Node pause-377932 in Controller
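The Allocated resources figures above are the column sums of the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m (42% of the node's 2 CPUs), CPU limits 100m (kindnet only, 5%), memory requests 70Mi + 100Mi + 50Mi = 220Mi and memory limits 170Mi + 50Mi = 220Mi, each roughly 2% of the 8022296Ki allocatable memory.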
	
	
	==> dmesg <==
	[Nov29 09:42] overlayfs: idmapped layers are currently not supported
	[Nov29 09:43] overlayfs: idmapped layers are currently not supported
	[Nov29 09:44] overlayfs: idmapped layers are currently not supported
	[  +2.899018] overlayfs: idmapped layers are currently not supported
	[ +47.632598] overlayfs: idmapped layers are currently not supported
	[Nov29 09:45] overlayfs: idmapped layers are currently not supported
	[Nov29 09:47] overlayfs: idmapped layers are currently not supported
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b95158c79f68b8cd69c81d30a1659ddc3402d6bec51b5fcdb3760a10ac2edba7] <==
	{"level":"warn","ts":"2025-11-29T10:13:17.202559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.218544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.236308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.269590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.287114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.299613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:13:17.395275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T10:14:11.497290Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T10:14:11.497336Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-377932","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-29T10:14:11.497433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T10:14:11.497490Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T10:14:11.632799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.632951Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-11-29T10:14:11.632875Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T10:14:11.633008Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T10:14:11.633017Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-29T10:14:11.632934Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T10:14:11.633031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T10:14:11.633037Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.633068Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-29T10:14:11.633066Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-29T10:14:11.636360Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-29T10:14:11.636451Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T10:14:11.636493Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T10:14:11.636501Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-377932","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fed61c9e9742c6414595d0362574493e656d3cfa2db02c3e82d96102424e92ad] <==
	{"level":"warn","ts":"2025-11-29T10:14:23.215328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.236008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.259043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.280206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.322162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.360884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.401999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.436261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.454406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.486222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.517090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.549893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.578651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.626146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.638400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.658311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.675894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.704691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.723900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.735331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.759977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.792208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.811079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.829719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:14:23.978442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:14:43 up  2:57,  0 user,  load average: 2.43, 2.41, 2.09
	Linux pause-377932 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22b7f675021b00c8e1d0af5fda6f17e6544084f6156ef90fc1fd28fee7ce6893] <==
	I1129 10:13:26.836212       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:13:26.837052       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:13:26.837175       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:13:26.837187       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:13:26.837201       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:13:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:13:27.046860       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:13:27.046934       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:13:27.046967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:13:27.047306       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:13:57.040232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:13:57.047751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:13:57.047751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:13:57.047836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:13:58.547827       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:13:58.547895       1 metrics.go:72] Registering metrics
	I1129 10:13:58.547959       1 controller.go:711] "Syncing nftables rules"
	I1129 10:14:07.045813       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:07.045871       1 main.go:301] handling current node
	
	
	==> kindnet [8a6d93399096ef757e9b847c20ed94f2b6ccecc303206436c36a708244a27ce7] <==
	I1129 10:14:19.742003       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:14:19.755707       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:14:19.755884       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:14:19.755898       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:14:19.755930       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:14:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1129 10:14:20.035035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 10:14:20.035707       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:14:20.035891       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:14:20.035937       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:14:20.036395       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:14:20.036861       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:14:20.037046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:14:20.037531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:14:25.036866       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:14:25.036909       1 metrics.go:72] Registering metrics
	I1129 10:14:25.036972       1 controller.go:711] "Syncing nftables rules"
	I1129 10:14:30.046573       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:30.046681       1 main.go:301] handling current node
	I1129 10:14:40.036419       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:14:40.036487       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e] <==
	W1129 10:14:11.520502       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520551       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520599       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520643       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520694       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520744       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520792       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520843       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520891       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520938       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.520988       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521037       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521083       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521357       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521423       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521476       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521542       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521596       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521669       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.521748       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.522940       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.522990       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523163       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523313       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 10:14:11.523489       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dfefbc3422a6947061bcebfb44ae3859abc0c9fd25a104d36834e245f96d2f52] <==
	I1129 10:14:24.914714       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:14:24.914742       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:14:24.915334       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:14:24.915547       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 10:14:24.915586       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:14:24.920137       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:14:24.922197       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:14:24.922700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:14:24.931152       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:14:24.932725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:14:24.932946       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:14:24.933801       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 10:14:24.935787       1 aggregator.go:171] initial CRD sync complete...
	I1129 10:14:24.935910       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:14:24.935943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:14:24.935974       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:14:24.951762       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1129 10:14:24.963712       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:14:24.972792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:14:25.609136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:14:26.907120       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:14:28.361604       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:14:28.412708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:14:28.460862       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:14:28.563599       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [90b80185bedc647416bcb9889116fe2d1fba411fd510a87e9e9fc3b474d88260] <==
	I1129 10:14:28.181295       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:14:28.186603       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 10:14:28.187789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 10:14:28.187887       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:14:28.189135       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 10:14:28.190289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 10:14:28.199650       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:14:28.199674       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:14:28.199682       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:14:28.199757       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:14:28.199845       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:14:28.204192       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:14:28.204295       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:14:28.204314       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 10:14:28.204543       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:14:28.204980       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:14:28.205087       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:14:28.205260       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:14:28.205706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-377932"
	I1129 10:14:28.205806       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:14:28.205895       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:14:28.206042       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 10:14:28.206751       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:14:28.210229       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:14:28.217536       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-controller-manager [d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e] <==
	I1129 10:13:25.335166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-377932" podCIDRs=["10.244.0.0/24"]
	I1129 10:13:25.341199       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:13:25.347489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:13:25.361397       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:13:25.361409       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:13:25.361537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:13:25.361548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:13:25.361649       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:13:25.361744       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-377932"
	I1129 10:13:25.361772       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:13:25.361852       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 10:13:25.361913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 10:13:25.363760       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 10:13:25.363810       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:13:25.363931       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:13:25.364371       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 10:13:25.364535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:13:25.364608       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:13:25.364834       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:13:25.372362       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:13:25.372522       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:13:25.372551       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 10:13:25.375657       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:13:25.378266       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 10:14:10.369981       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [212c65a4a208be355d66db67ca2c344f88277834c18676d7489f207b156349de] <==
	I1129 10:13:26.899791       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:13:27.025729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:13:27.128553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:13:27.128595       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:13:27.128660       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:13:27.229490       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:13:27.229553       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:13:27.234600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:13:27.234933       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:13:27.234955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:13:27.243287       1 config.go:200] "Starting service config controller"
	I1129 10:13:27.243315       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:13:27.243345       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:13:27.243349       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:13:27.243359       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:13:27.243363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:13:27.250736       1 config.go:309] "Starting node config controller"
	I1129 10:13:27.250754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:13:27.250762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:13:27.345961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:13:27.346102       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:13:27.346116       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [78219c83d385108b3bcc98070a36edbf0ddd7d6f73bb01e36ceca4997e32bb82] <==
	I1129 10:14:22.429475       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:14:23.531101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:14:25.034373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:14:25.034456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:14:25.034553       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:14:25.103963       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:14:25.104463       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:14:25.113391       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:14:25.113764       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:14:25.113982       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:14:25.115254       1 config.go:200] "Starting service config controller"
	I1129 10:14:25.115331       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:14:25.122852       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:14:25.122933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:14:25.122994       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:14:25.123022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:14:25.125927       1 config.go:309] "Starting node config controller"
	I1129 10:14:25.126025       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:14:25.126040       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:14:25.218172       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:14:25.224007       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:14:25.224104       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a3703bc7dce8c8ea16c9ac2c0b27d5dd1a62a540c2de8856ca64adf38f3253a] <==
	E1129 10:13:19.230408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:13:19.230467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:13:19.230531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:13:19.230583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:13:19.230642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:13:19.230679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:13:19.230708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:13:19.230742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:13:19.230778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:13:19.230842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:13:19.230888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:13:19.230934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:13:19.230980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:13:19.231026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:13:19.231075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:13:19.231166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:13:19.231217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:13:19.231294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1129 10:13:20.504266       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:11.501451       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 10:14:11.501472       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 10:14:11.501495       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1129 10:14:11.501517       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:11.501725       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 10:14:11.501740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d4f345bc31ce087d3d5bebc97a5baf10e78b82949811116ad88e27b783ee07a5] <==
	I1129 10:14:23.705518       1 serving.go:386] Generated self-signed cert in-memory
	W1129 10:14:24.778335       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:14:24.778451       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:14:24.778486       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:14:24.778538       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:14:24.830308       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:14:24.830399       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:14:24.846858       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:14:24.847121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:24.847176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:14:24.847222       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:14:24.950319       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.405595    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.405840    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.406103    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: I1129 10:14:19.412761    1299 scope.go:117] "RemoveContainer" containerID="7566fea0ee855a7a0038d9fea5d50cfd8ae1cabe6dcca7ef64020bca4d84e69e"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413364    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413570    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tg9h\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d" pod="kube-system/kube-proxy-5tg9h"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.413780    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.414016    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.420264    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.420585    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1c453fe7929af19256abbd914af6971" pod="kube-system/etcd-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.430817    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-dzxhh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7f51682b-549d-403a-8927-01e86fc63f8b" pod="kube-system/coredns-66bc5c9577-dzxhh"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431080    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="df00f14dc171d1b8ca1aed5155b9dc40" pod="kube-system/kube-scheduler-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431366    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1c453fe7929af19256abbd914af6971" pod="kube-system/etcd-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431580    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="15c1a738221249b75177a6b68255993d" pod="kube-system/kube-controller-manager-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: I1129 10:14:19.431744    1299 scope.go:117] "RemoveContainer" containerID="d990448e5be1425673245cb9bde43560ae5da7f2042103b7f3a09b8c555ca95e"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.431984    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-377932\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.432481    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tg9h\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7dd2e4bd-122d-4728-ab8d-1a8d38ee7d6d" pod="kube-system/kube-proxy-5tg9h"
	Nov 29 10:14:19 pause-377932 kubelet[1299]: E1129 10:14:19.432755    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8fr6g\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a1abc657-1bd4-43cc-860c-d23afb2e0cac" pod="kube-system/kindnet-8fr6g"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.880194    1299 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-377932\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.880943    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-377932\" is forbidden: User \"system:node:pause-377932\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" podUID="8e15dae51d36677871c02c1439d311cf" pod="kube-system/kube-apiserver-pause-377932"
	Nov 29 10:14:24 pause-377932 kubelet[1299]: E1129 10:14:24.882335    1299 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-377932\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-377932' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 29 10:14:31 pause-377932 kubelet[1299]: W1129 10:14:31.372665    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 29 10:14:37 pause-377932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:14:37 pause-377932 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:14:37 pause-377932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
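Both kube-proxy instances in the logs above emit the same E-level line: "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`". A minimal sketch of confirming what this cluster actually runs with, assuming the kubeadm-default ConfigMap name and key (an illustrative check, not something the harness runs):
	# Dump the stored kube-proxy configuration and look for the field the warning
	# refers to; an empty or null nodePortAddresses matches the message above.
	kubectl --context pause-377932 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -n nodePortAddresses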
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-377932 -n pause-377932
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-377932 -n pause-377932: exit status 2 (358.741943ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-377932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (297.95724ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
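The stderr above shows the mechanism of the failure: before enabling the addon, minikube checks whether the cluster is paused by listing containers with `sudo runc list -f json`, and on this CRI-O node that fails because /run/runc does not exist. A minimal sketch of poking at the same state by hand over SSH; the profile name comes from this test, while the /run/crun path is an assumption about where an alternative OCI runtime would keep its state:
	# Which OCI runtime state directories exist on the node?
	out/minikube-linux-arm64 ssh -p old-k8s-version-685516 "sudo ls -d /run/runc /run/crun"
	# List containers under the root the paused check expects; this reproduces the
	# "open /run/runc" error for as long as that directory is missing.
	out/minikube-linux-arm64 ssh -p old-k8s-version-685516 "sudo runc --root /run/runc list -f json"
	# The CRI-level view of running containers, independent of the OCI runtime.
	out/minikube-linux-arm64 ssh -p old-k8s-version-685516 "sudo crictl ps --state Running"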
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-685516 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-685516 describe deploy/metrics-server -n kube-system: exit status 1 (81.453154ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-685516 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
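The check at start_stop_delete_test.go:219 passes only when the metrics-server Deployment's image has been rewritten to the fake registry given via --registries. A minimal sketch of the equivalent manual check; the jsonpath is chosen for illustration rather than taken from the test:
	# Print the container image(s) on the Deployment; a passing run would show
	# fake.domain/registry.k8s.io/echoserver:1.4 here (in this failure the
	# Deployment is absent, so the command reports NotFound instead).
	kubectl --context old-k8s-version-685516 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'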
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-685516
helpers_test.go:243: (dbg) docker inspect old-k8s-version-685516:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	        "Created": "2025-11-29T10:17:19.016539964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:17:19.077910001Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hostname",
	        "HostsPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hosts",
	        "LogPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda-json.log",
	        "Name": "/old-k8s-version-685516",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-685516:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-685516",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	                "LowerDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-685516",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-685516/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-685516",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52019274176cbddfebb4f1b3d654e8557b8a25d0eeaab52ca5b653bc05f7f973",
	            "SandboxKey": "/var/run/docker/netns/52019274176c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-685516": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:95:f8:1c:82:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef6362a531f73ce6ec3b16d1e169336b1eaf8a28a088fdb25af281248ccfdc3e",
	                    "EndpointID": "7f6b9659f1411fb7551d08fdfc640f5d485760473f7b911a376bad9697126a91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-685516",
	                        "e87cb8cc4025"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
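The inspect dump above is mostly default Docker fields; a hedged way to pull out just the container state and the profile network's address, using a format string written for this report rather than anything the harness runs:
	# Expected output for the JSON above: "running 192.168.85.2".
	docker inspect old-k8s-version-685516 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "old-k8s-version-685516").IPAddress}}'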
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25: (1.261001067s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-151203 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo containerd config dump                                                                                                                                                                                                  │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo crio config                                                                                                                                                                                                             │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ delete  │ -p cilium-151203                                                                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:15 UTC │
	│ start   │ -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p force-systemd-env-510051                                                                                                                                                                                                                   │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-930117   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:17:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:17:12.998468  489129 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:17:12.998659  489129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:17:12.998695  489129 out.go:374] Setting ErrFile to fd 2...
	I1129 10:17:12.998721  489129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:17:12.999007  489129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:17:12.999454  489129 out.go:368] Setting JSON to false
	I1129 10:17:13.000399  489129 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10782,"bootTime":1764400651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:17:13.000498  489129 start.go:143] virtualization:  
	I1129 10:17:13.008802  489129 out.go:179] * [old-k8s-version-685516] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:17:13.012244  489129 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:17:13.012388  489129 notify.go:221] Checking for updates...
	I1129 10:17:13.018797  489129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:17:13.021713  489129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:17:13.024902  489129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:17:13.028077  489129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:17:13.031139  489129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:17:13.034829  489129 config.go:182] Loaded profile config "cert-expiration-930117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:17:13.034941  489129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:17:13.064821  489129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:17:13.064946  489129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:17:13.127260  489129 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:17:13.117305688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:17:13.127366  489129 docker.go:319] overlay module found
	I1129 10:17:13.130527  489129 out.go:179] * Using the docker driver based on user configuration
	I1129 10:17:13.133287  489129 start.go:309] selected driver: docker
	I1129 10:17:13.133304  489129 start.go:927] validating driver "docker" against <nil>
	I1129 10:17:13.133318  489129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:17:13.134099  489129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:17:13.193289  489129 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:17:13.183912115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:17:13.193443  489129 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:17:13.193671  489129 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:17:13.196741  489129 out.go:179] * Using Docker driver with root privileges
	I1129 10:17:13.199673  489129 cni.go:84] Creating CNI manager for ""
	I1129 10:17:13.199750  489129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:17:13.199766  489129 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:17:13.199855  489129 start.go:353] cluster config:
	{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:17:13.203024  489129 out.go:179] * Starting "old-k8s-version-685516" primary control-plane node in "old-k8s-version-685516" cluster
	I1129 10:17:13.205924  489129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:17:13.208885  489129 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:17:13.211609  489129 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:17:13.211654  489129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1129 10:17:13.211665  489129 cache.go:65] Caching tarball of preloaded images
	I1129 10:17:13.211688  489129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:17:13.211748  489129 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:17:13.211758  489129 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1129 10:17:13.211879  489129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:17:13.211904  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json: {Name:mke58a65998ddd83b7de78cf27480539ef472438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:13.241439  489129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:17:13.241461  489129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:17:13.241479  489129 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:17:13.241510  489129 start.go:360] acquireMachinesLock for old-k8s-version-685516: {Name:mk7482d2fe027ea0120ebabcf8485e86c0be82ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:17:13.241664  489129 start.go:364] duration metric: took 130.734µs to acquireMachinesLock for "old-k8s-version-685516"
	I1129 10:17:13.241707  489129 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:17:13.241786  489129 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:17:13.245211  489129 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:17:13.245456  489129 start.go:159] libmachine.API.Create for "old-k8s-version-685516" (driver="docker")
	I1129 10:17:13.245495  489129 client.go:173] LocalClient.Create starting
	I1129 10:17:13.245561  489129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:17:13.245597  489129 main.go:143] libmachine: Decoding PEM data...
	I1129 10:17:13.245618  489129 main.go:143] libmachine: Parsing certificate...
	I1129 10:17:13.245683  489129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:17:13.245711  489129 main.go:143] libmachine: Decoding PEM data...
	I1129 10:17:13.245727  489129 main.go:143] libmachine: Parsing certificate...
	I1129 10:17:13.246140  489129 cli_runner.go:164] Run: docker network inspect old-k8s-version-685516 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:17:13.262489  489129 cli_runner.go:211] docker network inspect old-k8s-version-685516 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:17:13.262627  489129 network_create.go:284] running [docker network inspect old-k8s-version-685516] to gather additional debugging logs...
	I1129 10:17:13.262658  489129 cli_runner.go:164] Run: docker network inspect old-k8s-version-685516
	W1129 10:17:13.278170  489129 cli_runner.go:211] docker network inspect old-k8s-version-685516 returned with exit code 1
	I1129 10:17:13.278201  489129 network_create.go:287] error running [docker network inspect old-k8s-version-685516]: docker network inspect old-k8s-version-685516: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-685516 not found
	I1129 10:17:13.278216  489129 network_create.go:289] output of [docker network inspect old-k8s-version-685516]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-685516 not found
	
	** /stderr **
	I1129 10:17:13.278368  489129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:17:13.294707  489129 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:17:13.295102  489129 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:17:13.295357  489129 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:17:13.295638  489129 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da32c907c77f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:43:2d:60:a7:95} reservation:<nil>}
	I1129 10:17:13.296113  489129 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9c50}
	I1129 10:17:13.296159  489129 network_create.go:124] attempt to create docker network old-k8s-version-685516 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 10:17:13.296218  489129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-685516 old-k8s-version-685516
	I1129 10:17:13.361453  489129 network_create.go:108] docker network old-k8s-version-685516 192.168.85.0/24 created
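The steps above are minikube's usual network setup: it probes the bridge networks Docker already has, skips their subnets, and creates a dedicated bridge for the profile on the first free /24 (192.168.85.0/24 in this run). The same thing can be reproduced by hand with the name, subnet and flags visible in the log (a sketch for poking at the host, not minikube's own code path):

  # subnets already claimed by existing Docker networks
  docker network ls --format '{{.Name}}' | while read -r net; do
    docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
  done

  # dedicated bridge network for the profile, as created above
  docker network create --driver=bridge \
    --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true \
    --label=name.minikube.sigs.k8s.io=old-k8s-version-685516 \
    old-k8s-version-685516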
	I1129 10:17:13.361483  489129 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-685516" container
	I1129 10:17:13.361556  489129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:17:13.380365  489129 cli_runner.go:164] Run: docker volume create old-k8s-version-685516 --label name.minikube.sigs.k8s.io=old-k8s-version-685516 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:17:13.397723  489129 oci.go:103] Successfully created a docker volume old-k8s-version-685516
	I1129 10:17:13.397815  489129 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-685516-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-685516 --entrypoint /usr/bin/test -v old-k8s-version-685516:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:17:13.963219  489129 oci.go:107] Successfully prepared a docker volume old-k8s-version-685516
	I1129 10:17:13.963287  489129 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:17:13.963297  489129 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 10:17:13.963365  489129 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-685516:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 10:17:18.945374  489129 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-685516:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.981957352s)
	I1129 10:17:18.945407  489129 kic.go:203] duration metric: took 4.982106917s to extract preloaded images to volume ...
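The preload extraction above is how the node's /var gets its images before the node container exists: a throwaway container mounts the lz4 tarball read-only next to the profile's named volume and untars into it, so CRI-O finds everything in place on first boot. A hand-run equivalent using the image and paths from the log (the cache path here is the Jenkins MINIKUBE_HOME and the image digest is omitted; substitute your own):

  docker volume create old-k8s-version-685516 \
    --label name.minikube.sigs.k8s.io=old-k8s-version-685516 \
    --label created_by.minikube.sigs.k8s.io=true

  docker run --rm --entrypoint /usr/bin/tar \
    -v "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
    -v old-k8s-version-685516:/extractDir \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
    -I lz4 -xf /preloaded.tar -C /extractDir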
	W1129 10:17:18.945561  489129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 10:17:18.945687  489129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 10:17:19.000345  489129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-685516 --name old-k8s-version-685516 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-685516 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-685516 --network old-k8s-version-685516 --ip 192.168.85.2 --volume old-k8s-version-685516:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 10:17:19.305793  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Running}}
	I1129 10:17:19.335430  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:19.359113  489129 cli_runner.go:164] Run: docker exec old-k8s-version-685516 stat /var/lib/dpkg/alternatives/iptables
	I1129 10:17:19.417469  489129 oci.go:144] the created container "old-k8s-version-685516" has a running status.
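From here on, everything happens over SSH to the node container, on whichever host port Docker published for the container's 22/tcp. That mapping can be read back with the same Go template the provisioning lines below use, which is handy for inspecting the node by hand:

  docker container inspect old-k8s-version-685516 --format '{{.State.Status}}'

  # host port published for the container's SSH port (33421 in this run)
  docker container inspect old-k8s-version-685516 \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'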
	I1129 10:17:19.417499  489129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa...
	I1129 10:17:19.857651  489129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 10:17:19.900865  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:19.927400  489129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 10:17:19.927421  489129 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-685516 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 10:17:19.994345  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:20.020551  489129 machine.go:94] provisionDockerMachine start ...
	I1129 10:17:20.020657  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:20.050438  489129 main.go:143] libmachine: Using SSH client type: native
	I1129 10:17:20.050777  489129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1129 10:17:20.050792  489129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:17:20.051481  489129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52846->127.0.0.1:33421: read: connection reset by peer
	I1129 10:17:23.205708  489129 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:17:23.205737  489129 ubuntu.go:182] provisioning hostname "old-k8s-version-685516"
	I1129 10:17:23.205840  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:23.224911  489129 main.go:143] libmachine: Using SSH client type: native
	I1129 10:17:23.225252  489129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1129 10:17:23.225270  489129 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-685516 && echo "old-k8s-version-685516" | sudo tee /etc/hostname
	I1129 10:17:23.387732  489129 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:17:23.387812  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:23.409675  489129 main.go:143] libmachine: Using SSH client type: native
	I1129 10:17:23.409991  489129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1129 10:17:23.410012  489129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-685516' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-685516/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-685516' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:17:23.570561  489129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:17:23.570632  489129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:17:23.570686  489129 ubuntu.go:190] setting up certificates
	I1129 10:17:23.570714  489129 provision.go:84] configureAuth start
	I1129 10:17:23.570793  489129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:17:23.587817  489129 provision.go:143] copyHostCerts
	I1129 10:17:23.587888  489129 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:17:23.587903  489129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:17:23.587985  489129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:17:23.588091  489129 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:17:23.588103  489129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:17:23.588130  489129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:17:23.588201  489129 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:17:23.588209  489129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:17:23.588240  489129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:17:23.588319  489129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-685516 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-685516]
	I1129 10:17:23.988390  489129 provision.go:177] copyRemoteCerts
	I1129 10:17:23.988464  489129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:17:23.988507  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.011552  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:24.117880  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:17:24.136365  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 10:17:24.156033  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:17:24.174605  489129 provision.go:87] duration metric: took 603.852116ms to configureAuth
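configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.85.2, localhost, minikube and the profile name, then copies it to /etc/docker on the node. To see what actually landed there, something along these lines works (it assumes openssl is available inside the kicbase image, which this log does not show):

  out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- \
    sudo openssl x509 -text -noout -in /etc/docker/server.pem

and grep the output for "Subject Alternative Name".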
	I1129 10:17:24.174640  489129 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:17:24.174839  489129 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:17:24.174953  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.192944  489129 main.go:143] libmachine: Using SSH client type: native
	I1129 10:17:24.193286  489129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1129 10:17:24.193308  489129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:17:24.510835  489129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:17:24.510901  489129 machine.go:97] duration metric: took 4.490328953s to provisionDockerMachine
	I1129 10:17:24.510926  489129 client.go:176] duration metric: took 11.265420368s to LocalClient.Create
	I1129 10:17:24.510976  489129 start.go:167] duration metric: took 11.265509288s to libmachine.API.Create "old-k8s-version-685516"
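The last SSH command of the provisioning phase wrote CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarted CRI-O; the kicbase crio unit is expected to read that file as an environment file, though only the file write and the restart are visible in this log. A quick check from the host:

  out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- \
    sudo cat /etc/sysconfig/crio.minikube
  out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- \
    sudo systemctl cat crio --no-pager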
	I1129 10:17:24.511003  489129 start.go:293] postStartSetup for "old-k8s-version-685516" (driver="docker")
	I1129 10:17:24.511027  489129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:17:24.511120  489129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:17:24.511188  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.536869  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:24.642385  489129 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:17:24.645694  489129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:17:24.645726  489129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:17:24.645737  489129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:17:24.645798  489129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:17:24.645894  489129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:17:24.646003  489129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:17:24.653831  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:17:24.673970  489129 start.go:296] duration metric: took 162.939881ms for postStartSetup
	I1129 10:17:24.674435  489129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:17:24.694572  489129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:17:24.694868  489129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:17:24.694924  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.712038  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:24.815308  489129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:17:24.820387  489129 start.go:128] duration metric: took 11.578584482s to createHost
	I1129 10:17:24.820414  489129 start.go:83] releasing machines lock for "old-k8s-version-685516", held for 11.578732684s
	I1129 10:17:24.820491  489129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:17:24.838651  489129 ssh_runner.go:195] Run: cat /version.json
	I1129 10:17:24.838703  489129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:17:24.838710  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.838772  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:24.864536  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:24.874492  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:25.058952  489129 ssh_runner.go:195] Run: systemctl --version
	I1129 10:17:25.065512  489129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:17:25.110534  489129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:17:25.115012  489129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:17:25.115116  489129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:17:25.144601  489129 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 10:17:25.144677  489129 start.go:496] detecting cgroup driver to use...
	I1129 10:17:25.144742  489129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:17:25.144830  489129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:17:25.163399  489129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:17:25.177513  489129 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:17:25.177606  489129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:17:25.195761  489129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:17:25.219239  489129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:17:25.347790  489129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:17:25.475870  489129 docker.go:234] disabling docker service ...
	I1129 10:17:25.475982  489129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:17:25.497168  489129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:17:25.511013  489129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:17:25.639454  489129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:17:25.770230  489129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:17:25.784049  489129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:17:25.799499  489129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1129 10:17:25.799566  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.808458  489129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:17:25.808557  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.817617  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.826867  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.835884  489129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:17:25.844230  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.853285  489129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.867037  489129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:17:25.875801  489129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:17:25.883750  489129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:17:25.891769  489129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:17:26.019905  489129 ssh_runner.go:195] Run: sudo systemctl restart crio
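The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A one-liner to confirm the edits took (expected values sketched in the comments; the file itself ships with the kicbase image):

  out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- \
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",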
	I1129 10:17:26.200497  489129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:17:26.200625  489129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:17:26.204356  489129 start.go:564] Will wait 60s for crictl version
	I1129 10:17:26.204449  489129 ssh_runner.go:195] Run: which crictl
	I1129 10:17:26.208082  489129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:17:26.233698  489129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:17:26.233854  489129 ssh_runner.go:195] Run: crio --version
	I1129 10:17:26.262254  489129 ssh_runner.go:195] Run: crio --version
	I1129 10:17:26.296305  489129 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1129 10:17:26.299180  489129 cli_runner.go:164] Run: docker network inspect old-k8s-version-685516 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:17:26.320056  489129 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:17:26.325301  489129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
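The /etc/hosts edit above (and the control-plane.minikube.internal one at the end of this excerpt) follows one idempotent pattern: filter out any previous line for the name, append the fresh IP-to-name mapping, and copy the temp file back with sudo, since a plain "sudo echo ... >> /etc/hosts" would run the redirection as the unprivileged user. Generalized sketch with hypothetical NAME/IP variables:

  NAME=host.minikube.internal    # hypothetical inputs; the log pairs this name
  IP=192.168.85.1                # with the gateway of the network created earlier
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts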
	I1129 10:17:26.335545  489129 kubeadm.go:884] updating cluster {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:17:26.335687  489129 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:17:26.335761  489129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:17:26.368386  489129 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:17:26.368413  489129 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:17:26.368470  489129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:17:26.393654  489129 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:17:26.393685  489129 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:17:26.393693  489129 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1129 10:17:26.393819  489129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-685516 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:17:26.393934  489129 ssh_runner.go:195] Run: crio config
	I1129 10:17:26.472519  489129 cni.go:84] Creating CNI manager for ""
	I1129 10:17:26.472545  489129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:17:26.472602  489129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:17:26.472634  489129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-685516 NodeName:old-k8s-version-685516 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:17:26.472781  489129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-685516"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:17:26.472855  489129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 10:17:26.480446  489129 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:17:26.480543  489129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:17:26.488387  489129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1129 10:17:26.501666  489129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:17:26.516006  489129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1129 10:17:26.533529  489129 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:17:26.537099  489129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:17:26.546961  489129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:17:26.683710  489129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:17:26.701686  489129 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516 for IP: 192.168.85.2
	I1129 10:17:26.701709  489129 certs.go:195] generating shared ca certs ...
	I1129 10:17:26.701725  489129 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:26.701879  489129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:17:26.701932  489129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:17:26.701944  489129 certs.go:257] generating profile certs ...
	I1129 10:17:26.702001  489129 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.key
	I1129 10:17:26.702021  489129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt with IP's: []
	I1129 10:17:26.974515  489129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt ...
	I1129 10:17:26.974549  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: {Name:mk3a7e474583a96d0128ae73a09ee823e106e97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:26.974745  489129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.key ...
	I1129 10:17:26.974762  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.key: {Name:mkcb23596b84d240eeeb8e137a438aa21e2523a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:26.974862  489129 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6
	I1129 10:17:26.974883  489129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt.a7d871e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 10:17:27.215102  489129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt.a7d871e6 ...
	I1129 10:17:27.215135  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt.a7d871e6: {Name:mk179015d447965689f4e1819c89ba75377e4f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:27.215335  489129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6 ...
	I1129 10:17:27.215351  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6: {Name:mk71fc8ef4700291e9c06d6ad17e591b41e6e4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:27.215441  489129 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt.a7d871e6 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt
	I1129 10:17:27.215517  489129 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key
	I1129 10:17:27.215580  489129 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key
	I1129 10:17:27.215597  489129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt with IP's: []
	I1129 10:17:27.575918  489129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt ...
	I1129 10:17:27.575950  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt: {Name:mkddfaac8cac6c3ed254a4c5413f324e232cdbd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:27.576141  489129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key ...
	I1129 10:17:27.576190  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key: {Name:mke122d9a7b6459e041439913297cdb3321f2b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:27.576396  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:17:27.576444  489129 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:17:27.576453  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:17:27.576482  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:17:27.576512  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:17:27.576554  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:17:27.576606  489129 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:17:27.577222  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:17:27.597524  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:17:27.619526  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:17:27.642984  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:17:27.660950  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 10:17:27.679172  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:17:27.697437  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:17:27.716276  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:17:27.734255  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:17:27.753000  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:17:27.770804  489129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:17:27.788745  489129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:17:27.802764  489129 ssh_runner.go:195] Run: openssl version
	I1129 10:17:27.808933  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:17:27.817418  489129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:17:27.821257  489129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:17:27.821345  489129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:17:27.867990  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:17:27.876589  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:17:27.885047  489129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:17:27.888857  489129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:17:27.888978  489129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:17:27.930547  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:17:27.939123  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:17:27.948133  489129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:17:27.951847  489129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:17:27.951946  489129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:17:27.993252  489129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:17:28.011243  489129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:17:28.015874  489129 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 10:17:28.015983  489129 kubeadm.go:401] StartCluster: {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:17:28.016080  489129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:17:28.016140  489129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:17:28.049843  489129 cri.go:89] found id: ""
	I1129 10:17:28.049918  489129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:17:28.058017  489129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 10:17:28.066235  489129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 10:17:28.066324  489129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 10:17:28.074327  489129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 10:17:28.074347  489129 kubeadm.go:158] found existing configuration files:
	
	I1129 10:17:28.074417  489129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 10:17:28.082634  489129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 10:17:28.082729  489129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 10:17:28.090900  489129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 10:17:28.099066  489129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 10:17:28.099164  489129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 10:17:28.106727  489129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 10:17:28.114425  489129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 10:17:28.114514  489129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 10:17:28.121910  489129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 10:17:28.129671  489129 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 10:17:28.129757  489129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 10:17:28.137429  489129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 10:17:28.245550  489129 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:17:28.331068  489129 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:17:43.715850  489129 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1129 10:17:43.715915  489129 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 10:17:43.716003  489129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 10:17:43.716060  489129 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 10:17:43.716100  489129 kubeadm.go:319] OS: Linux
	I1129 10:17:43.716155  489129 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 10:17:43.716208  489129 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 10:17:43.716264  489129 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 10:17:43.716316  489129 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 10:17:43.716368  489129 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 10:17:43.716419  489129 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 10:17:43.716468  489129 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 10:17:43.716519  489129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 10:17:43.716568  489129 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 10:17:43.716646  489129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 10:17:43.716744  489129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 10:17:43.716842  489129 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1129 10:17:43.716908  489129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 10:17:43.719948  489129 out.go:252]   - Generating certificates and keys ...
	I1129 10:17:43.720058  489129 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 10:17:43.720145  489129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 10:17:43.720233  489129 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 10:17:43.720307  489129 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 10:17:43.720391  489129 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 10:17:43.720461  489129 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 10:17:43.720528  489129 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 10:17:43.720700  489129 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-685516] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 10:17:43.720766  489129 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 10:17:43.720902  489129 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-685516] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 10:17:43.720976  489129 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 10:17:43.721044  489129 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 10:17:43.721095  489129 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:17:43.721155  489129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:17:43.721211  489129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:17:43.721268  489129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:17:43.721348  489129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:17:43.721407  489129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:17:43.721494  489129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:17:43.721566  489129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:17:43.724689  489129 out.go:252]   - Booting up control plane ...
	I1129 10:17:43.724803  489129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:17:43.724890  489129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:17:43.724964  489129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:17:43.725083  489129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:17:43.725174  489129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:17:43.725220  489129 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:17:43.725380  489129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1129 10:17:43.725462  489129 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.003627 seconds
	I1129 10:17:43.725574  489129 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:17:43.725708  489129 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:17:43.725770  489129 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:17:43.725966  489129 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-685516 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:17:43.726027  489129 kubeadm.go:319] [bootstrap-token] Using token: if2wmw.1kogoey2srfvjcxf
	I1129 10:17:43.729090  489129 out.go:252]   - Configuring RBAC rules ...
	I1129 10:17:43.729228  489129 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:17:43.729336  489129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:17:43.729503  489129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:17:43.729651  489129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:17:43.729805  489129 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:17:43.729917  489129 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:17:43.730047  489129 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:17:43.730235  489129 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:17:43.730300  489129 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:17:43.730310  489129 kubeadm.go:319] 
	I1129 10:17:43.730375  489129 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:17:43.730383  489129 kubeadm.go:319] 
	I1129 10:17:43.730466  489129 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:17:43.730475  489129 kubeadm.go:319] 
	I1129 10:17:43.730502  489129 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:17:43.730569  489129 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:17:43.730631  489129 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:17:43.730640  489129 kubeadm.go:319] 
	I1129 10:17:43.730699  489129 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:17:43.730707  489129 kubeadm.go:319] 
	I1129 10:17:43.730757  489129 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:17:43.730765  489129 kubeadm.go:319] 
	I1129 10:17:43.730822  489129 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:17:43.730906  489129 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:17:43.730984  489129 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:17:43.730992  489129 kubeadm.go:319] 
	I1129 10:17:43.731082  489129 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:17:43.731168  489129 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:17:43.731175  489129 kubeadm.go:319] 
	I1129 10:17:43.731265  489129 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token if2wmw.1kogoey2srfvjcxf \
	I1129 10:17:43.731380  489129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:17:43.731406  489129 kubeadm.go:319] 	--control-plane 
	I1129 10:17:43.731414  489129 kubeadm.go:319] 
	I1129 10:17:43.731506  489129 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:17:43.731515  489129 kubeadm.go:319] 
	I1129 10:17:43.731604  489129 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token if2wmw.1kogoey2srfvjcxf \
	I1129 10:17:43.731719  489129 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:17:43.731745  489129 cni.go:84] Creating CNI manager for ""
	I1129 10:17:43.731757  489129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:17:43.736706  489129 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 10:17:43.739575  489129 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:17:43.745448  489129 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1129 10:17:43.745469  489129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:17:43.788023  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:17:44.729111  489129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:17:44.729199  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:44.729277  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-685516 minikube.k8s.io/updated_at=2025_11_29T10_17_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=old-k8s-version-685516 minikube.k8s.io/primary=true
	I1129 10:17:44.880516  489129 ops.go:34] apiserver oom_adj: -16
	I1129 10:17:44.880648  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:45.381490  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:45.880831  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:46.380742  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:46.880855  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:47.381452  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:47.880784  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:48.380940  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:48.881377  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:49.381569  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:49.880760  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:50.381552  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:50.881296  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:51.380969  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:51.881636  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:52.380985  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:52.881015  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:53.380939  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:53.880801  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:54.381432  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:54.881037  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:55.380767  489129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:17:55.488404  489129 kubeadm.go:1114] duration metric: took 10.759257989s to wait for elevateKubeSystemPrivileges
	I1129 10:17:55.488442  489129 kubeadm.go:403] duration metric: took 27.472463778s to StartCluster
	I1129 10:17:55.488461  489129 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:55.488534  489129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:17:55.489447  489129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:17:55.489676  489129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:17:55.489822  489129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:17:55.490108  489129 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:17:55.490154  489129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:17:55.490216  489129 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-685516"
	I1129 10:17:55.490239  489129 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-685516"
	I1129 10:17:55.490274  489129 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:17:55.491055  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:55.491382  489129 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-685516"
	I1129 10:17:55.491435  489129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-685516"
	I1129 10:17:55.491752  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:55.493732  489129 out.go:179] * Verifying Kubernetes components...
	I1129 10:17:55.505318  489129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:17:55.528906  489129 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-685516"
	I1129 10:17:55.528945  489129 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:17:55.529359  489129 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:17:55.541670  489129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:17:55.544524  489129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:17:55.544548  489129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:17:55.544612  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:55.570842  489129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:17:55.570863  489129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:17:55.570946  489129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:17:55.579632  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:55.606568  489129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:17:55.856897  489129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:17:55.857070  489129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:17:55.913228  489129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:17:55.958537  489129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:17:56.447045  489129 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1129 10:17:56.447946  489129 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:17:56.799196  489129 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1129 10:17:56.802144  489129 addons.go:530] duration metric: took 1.311978944s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1129 10:17:56.952726  489129 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-685516" context rescaled to 1 replicas
	W1129 10:17:58.450956  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	W1129 10:18:00.452147  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	W1129 10:18:02.453562  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	W1129 10:18:04.951553  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	W1129 10:18:07.451068  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	W1129 10:18:09.451593  489129 node_ready.go:57] node "old-k8s-version-685516" has "Ready":"False" status (will retry)
	I1129 10:18:11.450878  489129 node_ready.go:49] node "old-k8s-version-685516" is "Ready"
	I1129 10:18:11.450916  489129 node_ready.go:38] duration metric: took 15.002945141s for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:18:11.450931  489129 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:18:11.450993  489129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:18:11.473651  489129 api_server.go:72] duration metric: took 15.983946803s to wait for apiserver process to appear ...
	I1129 10:18:11.473675  489129 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:18:11.473693  489129 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:18:11.483699  489129 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:18:11.485658  489129 api_server.go:141] control plane version: v1.28.0
	I1129 10:18:11.485739  489129 api_server.go:131] duration metric: took 12.055932ms to wait for apiserver health ...
	I1129 10:18:11.485771  489129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:18:11.495432  489129 system_pods.go:59] 8 kube-system pods found
	I1129 10:18:11.495480  489129 system_pods.go:61] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:11.495488  489129 system_pods.go:61] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running
	I1129 10:18:11.495539  489129 system_pods.go:61] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running
	I1129 10:18:11.495552  489129 system_pods.go:61] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running
	I1129 10:18:11.495558  489129 system_pods.go:61] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running
	I1129 10:18:11.495562  489129 system_pods.go:61] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running
	I1129 10:18:11.495566  489129 system_pods.go:61] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running
	I1129 10:18:11.495576  489129 system_pods.go:61] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:11.495582  489129 system_pods.go:74] duration metric: took 9.799389ms to wait for pod list to return data ...
	I1129 10:18:11.495609  489129 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:18:11.505424  489129 default_sa.go:45] found service account: "default"
	I1129 10:18:11.505448  489129 default_sa.go:55] duration metric: took 9.821724ms for default service account to be created ...
	I1129 10:18:11.505459  489129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:18:11.509236  489129 system_pods.go:86] 8 kube-system pods found
	I1129 10:18:11.509271  489129 system_pods.go:89] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:11.509278  489129 system_pods.go:89] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running
	I1129 10:18:11.509284  489129 system_pods.go:89] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running
	I1129 10:18:11.509289  489129 system_pods.go:89] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running
	I1129 10:18:11.509293  489129 system_pods.go:89] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running
	I1129 10:18:11.509297  489129 system_pods.go:89] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running
	I1129 10:18:11.509301  489129 system_pods.go:89] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running
	I1129 10:18:11.509307  489129 system_pods.go:89] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:11.509344  489129 retry.go:31] will retry after 294.674112ms: missing components: kube-dns
	I1129 10:18:11.812408  489129 system_pods.go:86] 8 kube-system pods found
	I1129 10:18:11.812442  489129 system_pods.go:89] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:11.812449  489129 system_pods.go:89] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running
	I1129 10:18:11.812456  489129 system_pods.go:89] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running
	I1129 10:18:11.812460  489129 system_pods.go:89] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running
	I1129 10:18:11.812466  489129 system_pods.go:89] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running
	I1129 10:18:11.812470  489129 system_pods.go:89] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running
	I1129 10:18:11.812475  489129 system_pods.go:89] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running
	I1129 10:18:11.812481  489129 system_pods.go:89] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:11.812502  489129 retry.go:31] will retry after 277.568626ms: missing components: kube-dns
	I1129 10:18:12.094712  489129 system_pods.go:86] 8 kube-system pods found
	I1129 10:18:12.094748  489129 system_pods.go:89] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:12.094756  489129 system_pods.go:89] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running
	I1129 10:18:12.094763  489129 system_pods.go:89] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running
	I1129 10:18:12.094767  489129 system_pods.go:89] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running
	I1129 10:18:12.094772  489129 system_pods.go:89] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running
	I1129 10:18:12.094776  489129 system_pods.go:89] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running
	I1129 10:18:12.094780  489129 system_pods.go:89] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running
	I1129 10:18:12.094784  489129 system_pods.go:89] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Running
	I1129 10:18:12.094792  489129 system_pods.go:126] duration metric: took 589.327625ms to wait for k8s-apps to be running ...
	I1129 10:18:12.094806  489129 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:18:12.094867  489129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:18:12.109241  489129 system_svc.go:56] duration metric: took 14.426182ms WaitForService to wait for kubelet
	I1129 10:18:12.109270  489129 kubeadm.go:587] duration metric: took 16.619569766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:18:12.109292  489129 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:18:12.112587  489129 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:18:12.112662  489129 node_conditions.go:123] node cpu capacity is 2
	I1129 10:18:12.112690  489129 node_conditions.go:105] duration metric: took 3.392317ms to run NodePressure ...
	I1129 10:18:12.112738  489129 start.go:242] waiting for startup goroutines ...
	I1129 10:18:12.112767  489129 start.go:247] waiting for cluster config update ...
	I1129 10:18:12.112796  489129 start.go:256] writing updated cluster config ...
	I1129 10:18:12.113153  489129 ssh_runner.go:195] Run: rm -f paused
	I1129 10:18:12.116865  489129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:18:12.122191  489129 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.128220  489129 pod_ready.go:94] pod "coredns-5dd5756b68-tpdzb" is "Ready"
	I1129 10:18:13.128248  489129 pod_ready.go:86] duration metric: took 1.006032409s for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.131482  489129 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.143069  489129 pod_ready.go:94] pod "etcd-old-k8s-version-685516" is "Ready"
	I1129 10:18:13.143098  489129 pod_ready.go:86] duration metric: took 11.585658ms for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.146027  489129 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.150900  489129 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-685516" is "Ready"
	I1129 10:18:13.150925  489129 pod_ready.go:86] duration metric: took 4.873524ms for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.153936  489129 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.326529  489129 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-685516" is "Ready"
	I1129 10:18:13.326566  489129 pod_ready.go:86] duration metric: took 172.602619ms for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.527154  489129 pod_ready.go:83] waiting for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:13.926565  489129 pod_ready.go:94] pod "kube-proxy-lqwmk" is "Ready"
	I1129 10:18:13.926593  489129 pod_ready.go:86] duration metric: took 399.413403ms for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:14.127067  489129 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:14.526478  489129 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-685516" is "Ready"
	I1129 10:18:14.526507  489129 pod_ready.go:86] duration metric: took 399.41452ms for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:18:14.526519  489129 pod_ready.go:40] duration metric: took 2.409577928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:18:14.582924  489129 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1129 10:18:14.585920  489129 out.go:203] 
	W1129 10:18:14.588821  489129 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 10:18:14.591750  489129 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 10:18:14.595331  489129 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-685516" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:18:11 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:11.878772722Z" level=info msg="Created container 4d0d4f9107843e76c5592741ae1ca2e79e94452d2e36e47e9d5a4347adbd70f3: kube-system/coredns-5dd5756b68-tpdzb/coredns" id=54fc3d19-060f-4456-bbb4-83a856626769 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:18:11 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:11.879687555Z" level=info msg="Starting container: 4d0d4f9107843e76c5592741ae1ca2e79e94452d2e36e47e9d5a4347adbd70f3" id=ce652992-75ce-49b1-b688-a17cf3e475ff name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:18:11 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:11.887058191Z" level=info msg="Started container" PID=1912 containerID=4d0d4f9107843e76c5592741ae1ca2e79e94452d2e36e47e9d5a4347adbd70f3 description=kube-system/coredns-5dd5756b68-tpdzb/coredns id=ce652992-75ce-49b1-b688-a17cf3e475ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=de65514ee4ca6854639e481293603c41e333804e4edd0577ea76fc5412bcf6a1
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.144863945Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2fdf11e8-4af7-47ef-8c9b-13ff71cc4411 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.14493913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.150297205Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2f590352d9a33c61d47f65212e3ef7904eb07de3d1fb1fe4c24b457152f41188 UID:50ae449b-ebf3-4617-bb6c-7e100cb4c66c NetNS:/var/run/netns/513c5dfc-49af-4ecf-9cb9-3018922bda71 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000f960d0}] Aliases:map[]}"
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.150473978Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.16134081Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2f590352d9a33c61d47f65212e3ef7904eb07de3d1fb1fe4c24b457152f41188 UID:50ae449b-ebf3-4617-bb6c-7e100cb4c66c NetNS:/var/run/netns/513c5dfc-49af-4ecf-9cb9-3018922bda71 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000f960d0}] Aliases:map[]}"
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.161486576Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.164031556Z" level=info msg="Ran pod sandbox 2f590352d9a33c61d47f65212e3ef7904eb07de3d1fb1fe4c24b457152f41188 with infra container: default/busybox/POD" id=2fdf11e8-4af7-47ef-8c9b-13ff71cc4411 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.16536232Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a1bbded-faa7-4bf1-93ab-13074fb58341 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.165508127Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3a1bbded-faa7-4bf1-93ab-13074fb58341 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.165563635Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3a1bbded-faa7-4bf1-93ab-13074fb58341 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.16919758Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f32a4528-b50e-461d-8ca2-b004ff850ff0 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:18:15 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:15.171699328Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.122487613Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f32a4528-b50e-461d-8ca2-b004ff850ff0 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.123467547Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a72acb67-0175-4302-ba8c-39bc33fe8aec name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.124869097Z" level=info msg="Creating container: default/busybox/busybox" id=4d6a5a2e-8f37-4aab-9570-cfb504483870 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.124970866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.129818692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.130304982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.145141106Z" level=info msg="Created container 2100f0b3bca286e4fd2de0d7e63d35a364d5e12383d9f82b8225d6b3abadf81d: default/busybox/busybox" id=4d6a5a2e-8f37-4aab-9570-cfb504483870 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.145908868Z" level=info msg="Starting container: 2100f0b3bca286e4fd2de0d7e63d35a364d5e12383d9f82b8225d6b3abadf81d" id=36031035-b59d-47b0-b11b-2521c8efb5c8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:18:17 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:17.149877669Z" level=info msg="Started container" PID=1968 containerID=2100f0b3bca286e4fd2de0d7e63d35a364d5e12383d9f82b8225d6b3abadf81d description=default/busybox/busybox id=36031035-b59d-47b0-b11b-2521c8efb5c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f590352d9a33c61d47f65212e3ef7904eb07de3d1fb1fe4c24b457152f41188
	Nov 29 10:18:25 old-k8s-version-685516 crio[838]: time="2025-11-29T10:18:25.06680135Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2100f0b3bca28       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   2f590352d9a33       busybox                                          default
	4d0d4f9107843       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   de65514ee4ca6       coredns-5dd5756b68-tpdzb                         kube-system
	98c562f37d9b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   1bbefd79d727f       storage-provisioner                              kube-system
	017c5a2b4fa07       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   9a56309fe5c9c       kindnet-kjgl5                                    kube-system
	84b1461b8655b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   da93354dd6342       kube-proxy-lqwmk                                 kube-system
	2f2ccf4b9af43       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      50 seconds ago      Running             kube-apiserver            0                   b2e4e3330b577       kube-apiserver-old-k8s-version-685516            kube-system
	56549fbe4e0a0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      50 seconds ago      Running             kube-controller-manager   0                   b37d057de6828       kube-controller-manager-old-k8s-version-685516   kube-system
	dbc23587d8080       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      50 seconds ago      Running             etcd                      0                   6cc745e7f807b       etcd-old-k8s-version-685516                      kube-system
	b9b99bf7d2d32       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      50 seconds ago      Running             kube-scheduler            0                   7b2a1950a62e8       kube-scheduler-old-k8s-version-685516            kube-system
	
	
	==> coredns [4d0d4f9107843e76c5592741ae1ca2e79e94452d2e36e47e9d5a4347adbd70f3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35260 - 23978 "HINFO IN 665982835714285036.3324365243366994856. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021337113s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-685516
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-685516
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-685516
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_17_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:17:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-685516
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:18:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:18:14 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:18:14 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:18:14 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:18:14 +0000   Sat, 29 Nov 2025 10:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-685516
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0e7612a5-7b98-4dd8-91b7-663bc5a3b138
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-tpdzb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-old-k8s-version-685516                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-kjgl5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-685516             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-685516    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-lqwmk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-685516             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-685516 event: Registered Node old-k8s-version-685516 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-685516 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 09:45] overlayfs: idmapped layers are currently not supported
	[Nov29 09:47] overlayfs: idmapped layers are currently not supported
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dbc23587d8080434780840eed28b5d2fffa9802a862b1b5e7c4c607cdf1fc8fc] <==
	{"level":"info","ts":"2025-11-29T10:17:36.01846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-29T10:17:36.018585Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-29T10:17:36.026243Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T10:17:36.026385Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:17:36.026539Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:17:36.027206Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T10:17:36.027282Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T10:17:36.553563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T10:17:36.553616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T10:17:36.553634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-29T10:17:36.553647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T10:17:36.553653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-29T10:17:36.553664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-29T10:17:36.553672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-29T10:17:36.562213Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:17:36.570239Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:17:36.570378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:17:36.570428Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:17:36.570473Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-685516 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T10:17:36.570514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:17:36.571554Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T10:17:36.574508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:17:36.578845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-29T10:17:36.574557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T10:17:36.591774Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:18:26 up  3:00,  0 user,  load average: 2.00, 2.64, 2.28
	Linux old-k8s-version-685516 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [017c5a2b4fa0774fb700b128871527323d92ed78679088c788fb44009bb18c02] <==
	I1129 10:18:00.721524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:18:00.721822       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:18:00.721962       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:18:00.721979       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:18:00.721989       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:18:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:18:00.925237       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:18:00.925323       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:18:00.925359       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:18:01.014337       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 10:18:01.214169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:18:01.214204       1 metrics.go:72] Registering metrics
	I1129 10:18:01.214315       1 controller.go:711] "Syncing nftables rules"
	I1129 10:18:10.930129       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:18:10.930186       1 main.go:301] handling current node
	I1129 10:18:20.926162       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:18:20.926201       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2f2ccf4b9af437ffb5b939b0d6e285aec6a06ad366de33a8eae2772412dcd844] <==
	I1129 10:17:40.069477       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 10:17:40.077089       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 10:17:40.081002       1 aggregator.go:166] initial CRD sync complete...
	I1129 10:17:40.081085       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 10:17:40.081117       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:17:40.081156       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:17:40.095236       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 10:17:40.114069       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1129 10:17:40.156143       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1129 10:17:40.372354       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:17:40.810485       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 10:17:40.816273       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 10:17:40.816901       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:17:41.654689       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:17:41.700939       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:17:41.805011       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 10:17:41.811885       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 10:17:41.812969       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 10:17:41.819720       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:17:41.847923       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 10:17:43.607890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 10:17:43.625438       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 10:17:43.637484       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 10:17:54.806611       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1129 10:17:55.575559       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [56549fbe4e0a0b61b7412ba5a562fc074ec1299abbe92b0da2f821a6c4c98339] <==
	I1129 10:17:54.834696       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-685516" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 10:17:54.839406       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-685516" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1129 10:17:54.865324       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 10:17:55.246176       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:17:55.246214       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 10:17:55.259586       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:17:55.624046       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lqwmk"
	I1129 10:17:55.643807       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kjgl5"
	I1129 10:17:55.859373       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-f74lw"
	I1129 10:17:55.909560       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tpdzb"
	I1129 10:17:55.960733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.144704784s"
	I1129 10:17:56.007464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.677362ms"
	I1129 10:17:56.033133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.50506ms"
	I1129 10:17:56.033352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.943µs"
	I1129 10:17:56.615054       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 10:17:56.654710       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-f74lw"
	I1129 10:17:56.668396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.940985ms"
	I1129 10:17:56.684351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.878777ms"
	I1129 10:17:56.684604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.65µs"
	I1129 10:18:11.199080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.443µs"
	I1129 10:18:11.226147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.513µs"
	I1129 10:18:11.980055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.082µs"
	I1129 10:18:13.000827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.104482ms"
	I1129 10:18:13.002154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.109199ms"
	I1129 10:18:14.816890       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [84b1461b8655b893b0ba0097e653d428d17239cfa89b6944170d9de2e251ae1c] <==
	I1129 10:17:57.890227       1 server_others.go:69] "Using iptables proxy"
	I1129 10:17:57.916661       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1129 10:17:57.947483       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:17:57.949603       1 server_others.go:152] "Using iptables Proxier"
	I1129 10:17:57.949709       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 10:17:57.949741       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 10:17:57.949826       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 10:17:57.950058       1 server.go:846] "Version info" version="v1.28.0"
	I1129 10:17:57.950392       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:17:57.952293       1 config.go:188] "Starting service config controller"
	I1129 10:17:57.952384       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 10:17:57.952443       1 config.go:97] "Starting endpoint slice config controller"
	I1129 10:17:57.952480       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 10:17:57.953920       1 config.go:315] "Starting node config controller"
	I1129 10:17:57.954052       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 10:17:58.053494       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 10:17:58.053503       1 shared_informer.go:318] Caches are synced for service config
	I1129 10:17:58.054931       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b9b99bf7d2d32b58b19bc9a52d9fc60f5c5d9ab4128baec459de2331607076c6] <==
	W1129 10:17:41.078444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1129 10:17:41.078463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1129 10:17:41.081488       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1129 10:17:41.081595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1129 10:17:41.081632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1129 10:17:41.081682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1129 10:17:41.081715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1129 10:17:41.081758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1129 10:17:41.081876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1129 10:17:41.081917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1129 10:17:41.082292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1129 10:17:41.082348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1129 10:17:41.082520       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:17:41.082559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 10:17:41.082593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 10:17:41.082563       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:17:41.082659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1129 10:17:41.082677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1129 10:17:41.082717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1129 10:17:41.082959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1129 10:17:41.082730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1129 10:17:41.083063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1129 10:17:41.082771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1129 10:17:41.083131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1129 10:17:42.065251       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 10:17:55 old-k8s-version-685516 kubelet[1359]: I1129 10:17:55.825151    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxcq5\" (UniqueName: \"kubernetes.io/projected/40a4871d-ed30-4509-b7be-30f31f9bf40f-kube-api-access-dxcq5\") pod \"kube-proxy-lqwmk\" (UID: \"40a4871d-ed30-4509-b7be-30f31f9bf40f\") " pod="kube-system/kube-proxy-lqwmk"
	Nov 29 10:17:55 old-k8s-version-685516 kubelet[1359]: I1129 10:17:55.825187    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1845614a-a695-4e01-9942-51df13c347cf-cni-cfg\") pod \"kindnet-kjgl5\" (UID: \"1845614a-a695-4e01-9942-51df13c347cf\") " pod="kube-system/kindnet-kjgl5"
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.926668    1359 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.926805    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40a4871d-ed30-4509-b7be-30f31f9bf40f-kube-proxy podName:40a4871d-ed30-4509-b7be-30f31f9bf40f nodeName:}" failed. No retries permitted until 2025-11-29 10:17:57.426764491 +0000 UTC m=+13.848072742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/40a4871d-ed30-4509-b7be-30f31f9bf40f-kube-proxy") pod "kube-proxy-lqwmk" (UID: "40a4871d-ed30-4509-b7be-30f31f9bf40f") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.990816    1359 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.990864    1359 projected.go:198] Error preparing data for projected volume kube-api-access-dxcq5 for pod kube-system/kube-proxy-lqwmk: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.990954    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/40a4871d-ed30-4509-b7be-30f31f9bf40f-kube-api-access-dxcq5 podName:40a4871d-ed30-4509-b7be-30f31f9bf40f nodeName:}" failed. No retries permitted until 2025-11-29 10:17:57.490932776 +0000 UTC m=+13.912241027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dxcq5" (UniqueName: "kubernetes.io/projected/40a4871d-ed30-4509-b7be-30f31f9bf40f-kube-api-access-dxcq5") pod "kube-proxy-lqwmk" (UID: "40a4871d-ed30-4509-b7be-30f31f9bf40f") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.998737    1359 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.998913    1359 projected.go:198] Error preparing data for projected volume kube-api-access-c2hzs for pod kube-system/kindnet-kjgl5: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:56 old-k8s-version-685516 kubelet[1359]: E1129 10:17:56.999036    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1845614a-a695-4e01-9942-51df13c347cf-kube-api-access-c2hzs podName:1845614a-a695-4e01-9942-51df13c347cf nodeName:}" failed. No retries permitted until 2025-11-29 10:17:57.499014041 +0000 UTC m=+13.920322292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2hzs" (UniqueName: "kubernetes.io/projected/1845614a-a695-4e01-9942-51df13c347cf-kube-api-access-c2hzs") pod "kindnet-kjgl5" (UID: "1845614a-a695-4e01-9942-51df13c347cf") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:17:57 old-k8s-version-685516 kubelet[1359]: W1129 10:17:57.833701    1359 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-9a56309fe5c9c02e0f245f5c22d4eb10cb6cf533952c47c65f0ea0f64f291716 WatchSource:0}: Error finding container 9a56309fe5c9c02e0f245f5c22d4eb10cb6cf533952c47c65f0ea0f64f291716: Status 404 returned error can't find the container with id 9a56309fe5c9c02e0f245f5c22d4eb10cb6cf533952c47c65f0ea0f64f291716
	Nov 29 10:18:00 old-k8s-version-685516 kubelet[1359]: I1129 10:18:00.927328    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lqwmk" podStartSLOduration=5.927282032 podCreationTimestamp="2025-11-29 10:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:17:57.915607491 +0000 UTC m=+14.336915742" watchObservedRunningTime="2025-11-29 10:18:00.927282032 +0000 UTC m=+17.348590283"
	Nov 29 10:18:03 old-k8s-version-685516 kubelet[1359]: I1129 10:18:03.782520    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-kjgl5" podStartSLOduration=6.001716206 podCreationTimestamp="2025-11-29 10:17:55 +0000 UTC" firstStartedPulling="2025-11-29 10:17:57.83864282 +0000 UTC m=+14.259951079" lastFinishedPulling="2025-11-29 10:18:00.619399901 +0000 UTC m=+17.040708152" observedRunningTime="2025-11-29 10:18:00.929239617 +0000 UTC m=+17.350547885" watchObservedRunningTime="2025-11-29 10:18:03.782473279 +0000 UTC m=+20.203781530"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.159940    1359 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.192172    1359 topology_manager.go:215] "Topology Admit Handler" podUID="13c1253b-cf78-454d-a5a4-397e98f7ed48" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.197916    1359 topology_manager.go:215] "Topology Admit Handler" podUID="29876dde-8614-4eb6-8b96-b3874f249d0f" podNamespace="kube-system" podName="coredns-5dd5756b68-tpdzb"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.379608    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79pj\" (UniqueName: \"kubernetes.io/projected/13c1253b-cf78-454d-a5a4-397e98f7ed48-kube-api-access-p79pj\") pod \"storage-provisioner\" (UID: \"13c1253b-cf78-454d-a5a4-397e98f7ed48\") " pod="kube-system/storage-provisioner"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.379821    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/13c1253b-cf78-454d-a5a4-397e98f7ed48-tmp\") pod \"storage-provisioner\" (UID: \"13c1253b-cf78-454d-a5a4-397e98f7ed48\") " pod="kube-system/storage-provisioner"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.379970    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29876dde-8614-4eb6-8b96-b3874f249d0f-config-volume\") pod \"coredns-5dd5756b68-tpdzb\" (UID: \"29876dde-8614-4eb6-8b96-b3874f249d0f\") " pod="kube-system/coredns-5dd5756b68-tpdzb"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.380034    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfr92\" (UniqueName: \"kubernetes.io/projected/29876dde-8614-4eb6-8b96-b3874f249d0f-kube-api-access-xfr92\") pod \"coredns-5dd5756b68-tpdzb\" (UID: \"29876dde-8614-4eb6-8b96-b3874f249d0f\") " pod="kube-system/coredns-5dd5756b68-tpdzb"
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: W1129 10:18:11.842553    1359 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-de65514ee4ca6854639e481293603c41e333804e4edd0577ea76fc5412bcf6a1 WatchSource:0}: Error finding container de65514ee4ca6854639e481293603c41e333804e4edd0577ea76fc5412bcf6a1: Status 404 returned error can't find the container with id de65514ee4ca6854639e481293603c41e333804e4edd0577ea76fc5412bcf6a1
	Nov 29 10:18:11 old-k8s-version-685516 kubelet[1359]: I1129 10:18:11.997781    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tpdzb" podStartSLOduration=16.997728282 podCreationTimestamp="2025-11-29 10:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:18:11.97907024 +0000 UTC m=+28.400378491" watchObservedRunningTime="2025-11-29 10:18:11.997728282 +0000 UTC m=+28.419036532"
	Nov 29 10:18:12 old-k8s-version-685516 kubelet[1359]: I1129 10:18:12.987120    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.987060808 podCreationTimestamp="2025-11-29 10:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:18:11.999480835 +0000 UTC m=+28.420789094" watchObservedRunningTime="2025-11-29 10:18:12.987060808 +0000 UTC m=+29.408369067"
	Nov 29 10:18:14 old-k8s-version-685516 kubelet[1359]: I1129 10:18:14.843054    1359 topology_manager.go:215] "Topology Admit Handler" podUID="50ae449b-ebf3-4617-bb6c-7e100cb4c66c" podNamespace="default" podName="busybox"
	Nov 29 10:18:15 old-k8s-version-685516 kubelet[1359]: I1129 10:18:15.005829    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2zz\" (UniqueName: \"kubernetes.io/projected/50ae449b-ebf3-4617-bb6c-7e100cb4c66c-kube-api-access-6h2zz\") pod \"busybox\" (UID: \"50ae449b-ebf3-4617-bb6c-7e100cb4c66c\") " pod="default/busybox"
	
	
	==> storage-provisioner [98c562f37d9b699e2c2b921be338d19c6ae7f2209313ed70711301ca39a8b304] <==
	I1129 10:18:11.900342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:18:11.915485       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:18:11.916064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 10:18:11.928407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:18:11.928653       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_97b05bdf-4678-4b80-b8ed-6ee00afdd3a7!
	I1129 10:18:11.931142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72126377-2253-4702-b95c-9156b1e866c0", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-685516_97b05bdf-4678-4b80-b8ed-6ee00afdd3a7 became leader
	I1129 10:18:12.030736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_97b05bdf-4678-4b80-b8ed-6ee00afdd3a7!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-685516 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-685516 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-685516 --alsologtostderr -v=1: exit status 80 (1.934401397s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-685516 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:19:48.242885  494926 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:19:48.243019  494926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:19:48.243030  494926 out.go:374] Setting ErrFile to fd 2...
	I1129 10:19:48.243037  494926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:19:48.243471  494926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:19:48.243814  494926 out.go:368] Setting JSON to false
	I1129 10:19:48.243858  494926 mustload.go:66] Loading cluster: old-k8s-version-685516
	I1129 10:19:48.244948  494926 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:19:48.245499  494926 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:19:48.264425  494926 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:19:48.264746  494926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:19:48.325329  494926 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:19:48.313256851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:19:48.326002  494926 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-685516 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:19:48.330102  494926 out.go:179] * Pausing node old-k8s-version-685516 ... 
	I1129 10:19:48.334481  494926 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:19:48.334830  494926 ssh_runner.go:195] Run: systemctl --version
	I1129 10:19:48.334882  494926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:19:48.352171  494926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:19:48.488889  494926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:19:48.514865  494926 pause.go:52] kubelet running: true
	I1129 10:19:48.514976  494926 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:19:48.745989  494926 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:19:48.746120  494926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:19:48.813054  494926 cri.go:89] found id: "0bd8bc1b44bd5ffbfce43bb9a656fe55922bf48d04059d3e194d0dd90434d60c"
	I1129 10:19:48.813118  494926 cri.go:89] found id: "d96ba3e30860d33e2f34e2ae4074b7ceb1ace88087c8a2c37af7d1d051febb85"
	I1129 10:19:48.813136  494926 cri.go:89] found id: "b71f8c879d3e9e455d4502f8326ee0cf5bf4bb869ea5e9562cf9852c0a2fe2af"
	I1129 10:19:48.813156  494926 cri.go:89] found id: "c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785"
	I1129 10:19:48.813175  494926 cri.go:89] found id: "928f645b1c52ecd3e1c677f3a9e0b0399d30209d88dd4d78fed8518ca1694aec"
	I1129 10:19:48.813201  494926 cri.go:89] found id: "d1ccde7192273d2c873fdd2640ce8601dedd0ee0717e723a71f05622c0cc2fd4"
	I1129 10:19:48.813223  494926 cri.go:89] found id: "6424c9687943bc7c9e2e9f3278b936d7cc5ac18aa5c44f37f9e424a325554a2d"
	I1129 10:19:48.813242  494926 cri.go:89] found id: "87200c874b1d75d3c50007e8a0e4cceae03a3a03b20279aba051adafd491eeea"
	I1129 10:19:48.813262  494926 cri.go:89] found id: "937bd7fba17a336453e2bbe35345a1c8f3fd0dfc08b79ebfff1e6b375e4b15ca"
	I1129 10:19:48.813283  494926 cri.go:89] found id: "f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738"
	I1129 10:19:48.813309  494926 cri.go:89] found id: "9e58c1c388072eb6bdd4fe933f31cef732fec984bbe7e352218d505d47ad45b3"
	I1129 10:19:48.813330  494926 cri.go:89] found id: ""
	I1129 10:19:48.813410  494926 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:19:48.824585  494926 retry.go:31] will retry after 359.31172ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:19:48Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:19:49.184192  494926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:19:49.197029  494926 pause.go:52] kubelet running: false
	I1129 10:19:49.197103  494926 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:19:49.370759  494926 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:19:49.370833  494926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:19:49.440364  494926 cri.go:89] found id: "0bd8bc1b44bd5ffbfce43bb9a656fe55922bf48d04059d3e194d0dd90434d60c"
	I1129 10:19:49.440388  494926 cri.go:89] found id: "d96ba3e30860d33e2f34e2ae4074b7ceb1ace88087c8a2c37af7d1d051febb85"
	I1129 10:19:49.440394  494926 cri.go:89] found id: "b71f8c879d3e9e455d4502f8326ee0cf5bf4bb869ea5e9562cf9852c0a2fe2af"
	I1129 10:19:49.440398  494926 cri.go:89] found id: "c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785"
	I1129 10:19:49.440401  494926 cri.go:89] found id: "928f645b1c52ecd3e1c677f3a9e0b0399d30209d88dd4d78fed8518ca1694aec"
	I1129 10:19:49.440405  494926 cri.go:89] found id: "d1ccde7192273d2c873fdd2640ce8601dedd0ee0717e723a71f05622c0cc2fd4"
	I1129 10:19:49.440408  494926 cri.go:89] found id: "6424c9687943bc7c9e2e9f3278b936d7cc5ac18aa5c44f37f9e424a325554a2d"
	I1129 10:19:49.440411  494926 cri.go:89] found id: "87200c874b1d75d3c50007e8a0e4cceae03a3a03b20279aba051adafd491eeea"
	I1129 10:19:49.440414  494926 cri.go:89] found id: "937bd7fba17a336453e2bbe35345a1c8f3fd0dfc08b79ebfff1e6b375e4b15ca"
	I1129 10:19:49.440420  494926 cri.go:89] found id: "f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738"
	I1129 10:19:49.440424  494926 cri.go:89] found id: "9e58c1c388072eb6bdd4fe933f31cef732fec984bbe7e352218d505d47ad45b3"
	I1129 10:19:49.440428  494926 cri.go:89] found id: ""
	I1129 10:19:49.440496  494926 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:19:49.451616  494926 retry.go:31] will retry after 363.349903ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:19:49Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:19:49.815150  494926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:19:49.829395  494926 pause.go:52] kubelet running: false
	I1129 10:19:49.829513  494926 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:19:50.018882  494926 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:19:50.019008  494926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:19:50.095409  494926 cri.go:89] found id: "0bd8bc1b44bd5ffbfce43bb9a656fe55922bf48d04059d3e194d0dd90434d60c"
	I1129 10:19:50.095432  494926 cri.go:89] found id: "d96ba3e30860d33e2f34e2ae4074b7ceb1ace88087c8a2c37af7d1d051febb85"
	I1129 10:19:50.095437  494926 cri.go:89] found id: "b71f8c879d3e9e455d4502f8326ee0cf5bf4bb869ea5e9562cf9852c0a2fe2af"
	I1129 10:19:50.095441  494926 cri.go:89] found id: "c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785"
	I1129 10:19:50.095444  494926 cri.go:89] found id: "928f645b1c52ecd3e1c677f3a9e0b0399d30209d88dd4d78fed8518ca1694aec"
	I1129 10:19:50.095451  494926 cri.go:89] found id: "d1ccde7192273d2c873fdd2640ce8601dedd0ee0717e723a71f05622c0cc2fd4"
	I1129 10:19:50.095476  494926 cri.go:89] found id: "6424c9687943bc7c9e2e9f3278b936d7cc5ac18aa5c44f37f9e424a325554a2d"
	I1129 10:19:50.095485  494926 cri.go:89] found id: "87200c874b1d75d3c50007e8a0e4cceae03a3a03b20279aba051adafd491eeea"
	I1129 10:19:50.095488  494926 cri.go:89] found id: "937bd7fba17a336453e2bbe35345a1c8f3fd0dfc08b79ebfff1e6b375e4b15ca"
	I1129 10:19:50.095504  494926 cri.go:89] found id: "f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738"
	I1129 10:19:50.095512  494926 cri.go:89] found id: "9e58c1c388072eb6bdd4fe933f31cef732fec984bbe7e352218d505d47ad45b3"
	I1129 10:19:50.095515  494926 cri.go:89] found id: ""
	I1129 10:19:50.095577  494926 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:19:50.111120  494926 out.go:203] 
	W1129 10:19:50.114056  494926 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:19:50.114108  494926 out.go:285] * 
	* 
	W1129 10:19:50.121230  494926 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:19:50.124367  494926 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-685516 --alsologtostderr -v=1 failed: exit status 80
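The stderr above shows why the pause failed: each of the three attempts to enumerate running containers with sudo runc list -f json exits with status 1 because /run/runc does not exist on the node, so minikube gives up with GUEST_PAUSE. A minimal manual check, assuming the profile is still up (the ssh form mirrors the ssh entries in the audit table below, with the remote command quoted):

	out/minikube-linux-arm64 -p old-k8s-version-685516 ssh "sudo runc list -f json"
	out/minikube-linux-arm64 -p old-k8s-version-685516 ssh "ls /run/runc"

If the second command also reports "No such file or directory", the runc state directory that the pause code lists is simply absent on this CRI-O node, matching the 10:19:50 error; whether CRI-O keeps its runtime state under a different root here is not something this log alone confirms.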
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-685516
helpers_test.go:243: (dbg) docker inspect old-k8s-version-685516:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	        "Created": "2025-11-29T10:17:19.016539964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:18:40.276987529Z",
	            "FinishedAt": "2025-11-29T10:18:39.436046621Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hostname",
	        "HostsPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hosts",
	        "LogPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda-json.log",
	        "Name": "/old-k8s-version-685516",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-685516:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-685516",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	                "LowerDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-685516",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-685516/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-685516",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f5e9050c206a6ce63cb87d7123b687bfa8d3dff71da5de6930de4618eb88074",
	            "SandboxKey": "/var/run/docker/netns/8f5e9050c206",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-685516": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:70:ba:40:50:d5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef6362a531f73ce6ec3b16d1e169336b1eaf8a28a088fdb25af281248ccfdc3e",
	                    "EndpointID": "72a05371592ade20004cc7392b9af228e4824b681979451f39b5830d70ece687",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-685516",
	                        "e87cb8cc4025"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
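The inspect output also confirms the SSH wiring the pause code resolved at 10:19:48: 22/tcp on the container is published on 127.0.0.1:33426, the same port the ssh client connected to. The template minikube runs can be reused by hand if the mapping needs re-checking:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516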
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516: exit status 2 (361.904924ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
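The "may be ok" note is expected here: the host container is Running (consistent with the docker inspect above), but the failed pause attempt had already run sudo systemctl disable --now kubelet and then logged "kubelet running: false", so a non-zero status exit follows. To see which components are down rather than just the host, a sketch using the unformatted status of the same profile:

	out/minikube-linux-arm64 status -p old-k8s-version-685516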
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25: (1.39986275s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-151203 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo containerd config dump                                                                                                                                                                                                  │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo crio config                                                                                                                                                                                                             │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ delete  │ -p cilium-151203                                                                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:15 UTC │
	│ start   │ -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p force-systemd-env-510051                                                                                                                                                                                                                   │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-930117   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:18:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:18:39.963949  492705 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:18:39.964077  492705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:18:39.964082  492705 out.go:374] Setting ErrFile to fd 2...
	I1129 10:18:39.964087  492705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:18:39.964335  492705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:18:39.964725  492705 out.go:368] Setting JSON to false
	I1129 10:18:39.965663  492705 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10869,"bootTime":1764400651,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:18:39.965741  492705 start.go:143] virtualization:  
	I1129 10:18:39.971914  492705 out.go:179] * [old-k8s-version-685516] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:18:39.975079  492705 notify.go:221] Checking for updates...
	I1129 10:18:39.975853  492705 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:18:39.978961  492705 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:18:39.982021  492705 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:39.985002  492705 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:18:39.987992  492705 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:18:39.990887  492705 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:18:39.994215  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:39.997674  492705 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1129 10:18:40.001047  492705 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:18:40.053196  492705 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:18:40.053356  492705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:18:40.120111  492705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:18:40.109863618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:18:40.120252  492705 docker.go:319] overlay module found
	I1129 10:18:40.123516  492705 out.go:179] * Using the docker driver based on existing profile
	I1129 10:18:40.126591  492705 start.go:309] selected driver: docker
	I1129 10:18:40.126623  492705 start.go:927] validating driver "docker" against &{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:40.126743  492705 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:18:40.127576  492705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:18:40.187163  492705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:18:40.176435258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:18:40.187597  492705 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:18:40.187651  492705 cni.go:84] Creating CNI manager for ""
	I1129 10:18:40.187719  492705 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:18:40.187766  492705 start.go:353] cluster config:
	{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:40.190969  492705 out.go:179] * Starting "old-k8s-version-685516" primary control-plane node in "old-k8s-version-685516" cluster
	I1129 10:18:40.194205  492705 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:18:40.197328  492705 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:18:40.201625  492705 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:18:40.201680  492705 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1129 10:18:40.201691  492705 cache.go:65] Caching tarball of preloaded images
	I1129 10:18:40.201725  492705 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:18:40.201782  492705 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:18:40.201794  492705 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1129 10:18:40.201905  492705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:18:40.223559  492705 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:18:40.223585  492705 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:18:40.223605  492705 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:18:40.223641  492705 start.go:360] acquireMachinesLock for old-k8s-version-685516: {Name:mk7482d2fe027ea0120ebabcf8485e86c0be82ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:18:40.223714  492705 start.go:364] duration metric: took 46.844µs to acquireMachinesLock for "old-k8s-version-685516"
	I1129 10:18:40.223738  492705 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:18:40.223752  492705 fix.go:54] fixHost starting: 
	I1129 10:18:40.224005  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:40.241600  492705 fix.go:112] recreateIfNeeded on old-k8s-version-685516: state=Stopped err=<nil>
	W1129 10:18:40.241629  492705 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:18:40.244967  492705 out.go:252] * Restarting existing docker container for "old-k8s-version-685516" ...
	I1129 10:18:40.245055  492705 cli_runner.go:164] Run: docker start old-k8s-version-685516
	I1129 10:18:40.491816  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:40.513696  492705 kic.go:430] container "old-k8s-version-685516" state is running.
	I1129 10:18:40.514188  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:40.542152  492705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:18:40.542417  492705 machine.go:94] provisionDockerMachine start ...
	I1129 10:18:40.542490  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:40.560162  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:40.560492  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:40.560508  492705 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:18:40.561220  492705 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:18:43.713639  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:18:43.713665  492705 ubuntu.go:182] provisioning hostname "old-k8s-version-685516"
	I1129 10:18:43.713750  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:43.730716  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:43.731035  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:43.731053  492705 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-685516 && echo "old-k8s-version-685516" | sudo tee /etc/hostname
	I1129 10:18:43.891058  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:18:43.891151  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:43.910137  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:43.910477  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:43.910500  492705 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-685516' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-685516/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-685516' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:18:44.062576  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:18:44.062600  492705 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:18:44.062625  492705 ubuntu.go:190] setting up certificates
	I1129 10:18:44.062636  492705 provision.go:84] configureAuth start
	I1129 10:18:44.062696  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:44.080652  492705 provision.go:143] copyHostCerts
	I1129 10:18:44.080733  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:18:44.080748  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:18:44.080828  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:18:44.080957  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:18:44.080970  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:18:44.081000  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:18:44.081062  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:18:44.081072  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:18:44.081098  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:18:44.081172  492705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-685516 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-685516]
	I1129 10:18:44.505871  492705 provision.go:177] copyRemoteCerts
	I1129 10:18:44.505950  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:18:44.506003  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:44.529450  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:44.633775  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:18:44.651923  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 10:18:44.669717  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:18:44.687720  492705 provision.go:87] duration metric: took 625.069369ms to configureAuth
	I1129 10:18:44.687747  492705 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:18:44.687945  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:44.688048  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:44.705273  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:44.705579  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:44.705593  492705 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:18:45.139681  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:18:45.139713  492705 machine.go:97] duration metric: took 4.597282599s to provisionDockerMachine
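	The runtime-options step above writes a sysconfig drop-in and restarts CRI-O over SSH. A local, illustrative equivalent in Go (requires root; paths and file content are copied from the log, and this is only a sketch of the step, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Write the drop-in, then restart the service, mirroring the
		// /etc/sysconfig/crio.minikube + `systemctl restart crio` step above.
		content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "restart failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Println("crio restarted")
	}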
	I1129 10:18:45.139735  492705 start.go:293] postStartSetup for "old-k8s-version-685516" (driver="docker")
	I1129 10:18:45.139818  492705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:18:45.139923  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:18:45.139997  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.161865  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.279602  492705 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:18:45.287071  492705 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:18:45.287103  492705 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:18:45.287117  492705 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:18:45.287175  492705 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:18:45.287259  492705 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:18:45.287369  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:18:45.295596  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:18:45.316836  492705 start.go:296] duration metric: took 177.015422ms for postStartSetup
	I1129 10:18:45.316959  492705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:18:45.317043  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.339386  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.443136  492705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:18:45.448083  492705 fix.go:56] duration metric: took 5.224324748s for fixHost
	I1129 10:18:45.448120  492705 start.go:83] releasing machines lock for "old-k8s-version-685516", held for 5.22438275s
	I1129 10:18:45.448208  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:45.465009  492705 ssh_runner.go:195] Run: cat /version.json
	I1129 10:18:45.465053  492705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:18:45.465068  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.465109  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.487480  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.499054  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.602814  492705 ssh_runner.go:195] Run: systemctl --version
	I1129 10:18:45.693664  492705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:18:45.745473  492705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:18:45.750025  492705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:18:45.750240  492705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:18:45.759219  492705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:18:45.759247  492705 start.go:496] detecting cgroup driver to use...
	I1129 10:18:45.759279  492705 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:18:45.759339  492705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:18:45.774776  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:18:45.788739  492705 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:18:45.788853  492705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:18:45.804632  492705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:18:45.817878  492705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:18:45.926990  492705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:18:46.058426  492705 docker.go:234] disabling docker service ...
	I1129 10:18:46.058537  492705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:18:46.074487  492705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:18:46.087816  492705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:18:46.207223  492705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:18:46.327680  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:18:46.340445  492705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:18:46.357631  492705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1129 10:18:46.357722  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.367700  492705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:18:46.367791  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.377033  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.386979  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.396657  492705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:18:46.405636  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.415699  492705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.424857  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.434426  492705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:18:46.441952  492705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:18:46.449355  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:46.566247  492705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:18:46.752494  492705 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:18:46.752625  492705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
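	The 60s wait above is a simple poll for the socket file to appear after the CRI-O restart. A standalone sketch of that pattern (path and timeout taken from the log; not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses, the same idea
	// as "Will wait 60s for socket path /var/run/crio/crio.sock" above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is present")
	}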
	I1129 10:18:46.759476  492705 start.go:564] Will wait 60s for crictl version
	I1129 10:18:46.759560  492705 ssh_runner.go:195] Run: which crictl
	I1129 10:18:46.764031  492705 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:18:46.791956  492705 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:18:46.792076  492705 ssh_runner.go:195] Run: crio --version
	I1129 10:18:46.821367  492705 ssh_runner.go:195] Run: crio --version
	I1129 10:18:46.854361  492705 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1129 10:18:46.857106  492705 cli_runner.go:164] Run: docker network inspect old-k8s-version-685516 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:18:46.873109  492705 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:18:46.877143  492705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:18:46.886908  492705 kubeadm.go:884] updating cluster {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:18:46.887040  492705 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:18:46.887091  492705 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:18:46.923599  492705 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:18:46.923625  492705 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:18:46.923679  492705 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:18:46.949476  492705 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:18:46.949500  492705 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:18:46.949508  492705 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1129 10:18:46.949606  492705 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-685516 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:18:46.949685  492705 ssh_runner.go:195] Run: crio config
	I1129 10:18:47.018359  492705 cni.go:84] Creating CNI manager for ""
	I1129 10:18:47.018386  492705 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:18:47.026156  492705 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:18:47.026261  492705 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-685516 NodeName:old-k8s-version-685516 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:18:47.026663  492705 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-685516"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
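	The kubeadm/kubelet/kube-proxy configuration above is rendered from the kubeadm options listed just before it. As an illustration of that rendering step only (the struct fields and template text below are simplified assumptions, not minikube's actual template), a Go sketch using text/template:

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified stand-in for the kubeadm options shown in the log.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.85.2",
			BindPort:         8443,
			NodeName:         "old-k8s-version-685516",
			CRISocket:        "unix:///var/run/crio/crio.sock",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.28.0",
		}
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			os.Exit(1)
		}
	}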
	I1129 10:18:47.026767  492705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 10:18:47.035016  492705 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:18:47.035098  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:18:47.043081  492705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1129 10:18:47.056461  492705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:18:47.069030  492705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1129 10:18:47.081659  492705 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:18:47.085219  492705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:18:47.095080  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:47.210325  492705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:18:47.226144  492705 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516 for IP: 192.168.85.2
	I1129 10:18:47.226179  492705 certs.go:195] generating shared ca certs ...
	I1129 10:18:47.226217  492705 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:47.226418  492705 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:18:47.226512  492705 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:18:47.226529  492705 certs.go:257] generating profile certs ...
	I1129 10:18:47.226655  492705 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.key
	I1129 10:18:47.226781  492705 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6
	I1129 10:18:47.226866  492705 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key
	I1129 10:18:47.227039  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:18:47.227102  492705 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:18:47.227118  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:18:47.227150  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:18:47.227208  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:18:47.227257  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:18:47.227354  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:18:47.228053  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:18:47.252414  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:18:47.274591  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:18:47.296101  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:18:47.318524  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 10:18:47.339732  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:18:47.357353  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:18:47.386160  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:18:47.417911  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:18:47.439626  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:18:47.459874  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:18:47.478552  492705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:18:47.491768  492705 ssh_runner.go:195] Run: openssl version
	I1129 10:18:47.497941  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:18:47.507996  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.511791  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.511855  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.555293  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:18:47.564744  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:18:47.572965  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.576897  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.576964  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.618086  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:18:47.625872  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:18:47.634377  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.638355  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.638438  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.679391  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:18:47.687071  492705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:18:47.690690  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:18:47.731540  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:18:47.771986  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:18:47.813518  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:18:47.857273  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:18:47.914363  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
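	Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 (the file path is just one of the paths from the log, used as a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate in path expires within d,
	// roughly what `openssl x509 -noout -in path -checkend <seconds>` tests above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}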
	I1129 10:18:47.958973  492705 kubeadm.go:401] StartCluster: {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:47.959121  492705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:18:47.959232  492705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:18:48.001641  492705 cri.go:89] found id: ""
	I1129 10:18:48.001789  492705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:18:48.030377  492705 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:18:48.030402  492705 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:18:48.030495  492705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:18:48.040905  492705 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:18:48.041749  492705 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-685516" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:48.042123  492705 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-685516" cluster setting kubeconfig missing "old-k8s-version-685516" context setting]
	I1129 10:18:48.042688  492705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.044731  492705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:18:48.057827  492705 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 10:18:48.057861  492705 kubeadm.go:602] duration metric: took 27.44143ms to restartPrimaryControlPlane
	I1129 10:18:48.057871  492705 kubeadm.go:403] duration metric: took 98.912523ms to StartCluster
	I1129 10:18:48.057887  492705 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.057952  492705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:48.058958  492705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.059211  492705 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:18:48.059486  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:48.059527  492705 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:18:48.059591  492705 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-685516"
	I1129 10:18:48.059606  492705 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-685516"
	W1129 10:18:48.059613  492705 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:18:48.059643  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.060078  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.060429  492705 addons.go:70] Setting dashboard=true in profile "old-k8s-version-685516"
	I1129 10:18:48.060456  492705 addons.go:239] Setting addon dashboard=true in "old-k8s-version-685516"
	W1129 10:18:48.060463  492705 addons.go:248] addon dashboard should already be in state true
	I1129 10:18:48.060486  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.060894  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.061275  492705 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-685516"
	I1129 10:18:48.061298  492705 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-685516"
	I1129 10:18:48.061579  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.063091  492705 out.go:179] * Verifying Kubernetes components...
	I1129 10:18:48.066335  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:48.133721  492705 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-685516"
	W1129 10:18:48.133745  492705 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:18:48.133769  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.134253  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.137318  492705 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:18:48.139294  492705 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:18:48.142294  492705 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:18:48.142319  492705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:18:48.142389  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.145294  492705 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:18:48.148222  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:18:48.148251  492705 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:18:48.148318  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.179325  492705 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:18:48.179354  492705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:18:48.179418  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.186327  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.228271  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.238678  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.418104  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:18:48.468090  492705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:18:48.474447  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:18:48.474472  492705 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:18:48.516254  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:18:48.516275  492705 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:18:48.536841  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:18:48.536866  492705 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:18:48.567502  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:18:48.623049  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:18:48.623075  492705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:18:48.709932  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:18:48.709958  492705 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:18:48.760035  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:18:48.760059  492705 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:18:48.808158  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:18:48.808182  492705 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:18:48.827438  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:18:48.827464  492705 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:18:48.849572  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:18:48.849596  492705 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:18:48.869347  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:18:54.339082  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.920939133s)
	I1129 10:18:54.339146  492705 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.871031412s)
	I1129 10:18:54.339175  492705 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:18:54.339494  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.77196294s)
	I1129 10:18:54.381324  492705 node_ready.go:49] node "old-k8s-version-685516" is "Ready"
	I1129 10:18:54.381358  492705 node_ready.go:38] duration metric: took 42.152408ms for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:18:54.381372  492705 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:18:54.381436  492705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:18:55.340522  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.471133758s)
	I1129 10:18:55.340622  492705 api_server.go:72] duration metric: took 7.281380961s to wait for apiserver process to appear ...
	I1129 10:18:55.340652  492705 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:18:55.340700  492705 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:18:55.343846  492705 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-685516 addons enable metrics-server
	
	I1129 10:18:55.346941  492705 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1129 10:18:55.350068  492705 addons.go:530] duration metric: took 7.290534502s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1129 10:18:55.352127  492705 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:18:55.358859  492705 api_server.go:141] control plane version: v1.28.0
	I1129 10:18:55.358931  492705 api_server.go:131] duration metric: took 18.259786ms to wait for apiserver health ...
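	The health wait above polls the apiserver's /healthz endpoint over HTTPS until it answers 200. An illustrative sketch of that loop (endpoint copied from the log; TLS verification is skipped here only to keep the example short, where the real check would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "gave up waiting for /healthz")
				os.Exit(1)
			}
			time.Sleep(time.Second)
		}
	}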
	I1129 10:18:55.358959  492705 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:18:55.367107  492705 system_pods.go:59] 8 kube-system pods found
	I1129 10:18:55.367194  492705 system_pods.go:61] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:55.367233  492705 system_pods.go:61] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:18:55.367260  492705 system_pods.go:61] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:18:55.367286  492705 system_pods.go:61] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:18:55.367320  492705 system_pods.go:61] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:18:55.367341  492705 system_pods.go:61] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:18:55.367370  492705 system_pods.go:61] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:18:55.367402  492705 system_pods.go:61] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:55.367423  492705 system_pods.go:74] duration metric: took 8.444257ms to wait for pod list to return data ...
	I1129 10:18:55.367447  492705 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:18:55.371509  492705 default_sa.go:45] found service account: "default"
	I1129 10:18:55.371574  492705 default_sa.go:55] duration metric: took 4.106763ms for default service account to be created ...
	I1129 10:18:55.371600  492705 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:18:55.377598  492705 system_pods.go:86] 8 kube-system pods found
	I1129 10:18:55.377675  492705 system_pods.go:89] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:55.377703  492705 system_pods.go:89] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:18:55.377739  492705 system_pods.go:89] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:18:55.377761  492705 system_pods.go:89] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:18:55.377794  492705 system_pods.go:89] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:18:55.377817  492705 system_pods.go:89] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:18:55.377847  492705 system_pods.go:89] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:18:55.377884  492705 system_pods.go:89] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:55.377917  492705 system_pods.go:126] duration metric: took 6.298271ms to wait for k8s-apps to be running ...
	I1129 10:18:55.377941  492705 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:18:55.378011  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:18:55.397842  492705 system_svc.go:56] duration metric: took 19.890952ms WaitForService to wait for kubelet
	I1129 10:18:55.397911  492705 kubeadm.go:587] duration metric: took 7.338670757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:18:55.397947  492705 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:18:55.401846  492705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:18:55.401896  492705 node_conditions.go:123] node cpu capacity is 2
	I1129 10:18:55.401911  492705 node_conditions.go:105] duration metric: took 3.942273ms to run NodePressure ...
	I1129 10:18:55.401925  492705 start.go:242] waiting for startup goroutines ...
	I1129 10:18:55.401933  492705 start.go:247] waiting for cluster config update ...
	I1129 10:18:55.401956  492705 start.go:256] writing updated cluster config ...
	I1129 10:18:55.402300  492705 ssh_runner.go:195] Run: rm -f paused
	I1129 10:18:55.406312  492705 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:18:55.415151  492705 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:18:57.421365  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:18:59.421695  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:01.921155  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:03.921782  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:06.423720  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:08.921331  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:10.921664  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:12.921911  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:15.422231  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:17.922854  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:20.421178  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:22.421873  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:24.920955  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:27.426035  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:29.921336  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:32.420614  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:34.421308  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	I1129 10:19:34.920905  492705 pod_ready.go:94] pod "coredns-5dd5756b68-tpdzb" is "Ready"
	I1129 10:19:34.920939  492705 pod_ready.go:86] duration metric: took 39.505756272s for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.924531  492705 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.930115  492705 pod_ready.go:94] pod "etcd-old-k8s-version-685516" is "Ready"
	I1129 10:19:34.930143  492705 pod_ready.go:86] duration metric: took 5.580955ms for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.933340  492705 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.938542  492705 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-685516" is "Ready"
	I1129 10:19:34.938570  492705 pod_ready.go:86] duration metric: took 5.203908ms for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.941744  492705 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.118927  492705 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-685516" is "Ready"
	I1129 10:19:35.119026  492705 pod_ready.go:86] duration metric: took 177.251196ms for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.322010  492705 pod_ready.go:83] waiting for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.718879  492705 pod_ready.go:94] pod "kube-proxy-lqwmk" is "Ready"
	I1129 10:19:35.718910  492705 pod_ready.go:86] duration metric: took 396.873466ms for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.919844  492705 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:36.319360  492705 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-685516" is "Ready"
	I1129 10:19:36.319441  492705 pod_ready.go:86] duration metric: took 399.569104ms for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:36.319472  492705 pod_ready.go:40] duration metric: took 40.913125978s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:19:36.378561  492705 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1129 10:19:36.381636  492705 out.go:203] 
	W1129 10:19:36.384558  492705 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 10:19:36.387466  492705 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 10:19:36.390360  492705 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-685516" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.136665062Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.14039322Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.140432146Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.140459412Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.144177313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.144216632Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.14423895Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147682748Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147716422Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147739421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.152036579Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.152075086Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.413516247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb0f55b1-46f5-4000-90c8-a104809e7943 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.41470369Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=af5b4e43-1ee3-4f0c-8faf-6fb530a3be64 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.418222336Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=01061956-6d3c-4988-b22d-3bade4f8a154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.418328774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.42648735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.42701836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.444931723Z" level=info msg="Created container f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=01061956-6d3c-4988-b22d-3bade4f8a154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.464887301Z" level=info msg="Starting container: f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738" id=4eeccfb5-fb56-4629-9d8c-bec55c328448 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.472363883Z" level=info msg="Started container" PID=1743 containerID=f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper id=4eeccfb5-fb56-4629-9d8c-bec55c328448 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f
	Nov 29 10:19:48 old-k8s-version-685516 conmon[1741]: conmon f82e905beeb9de6e6be4 <ninfo>: container 1743 exited with status 1
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.649483234Z" level=info msg="Removing container: afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.658611069Z" level=info msg="Error loading conmon cgroup of container afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e: cgroup deleted" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.662828651Z" level=info msg="Removed container afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f82e905beeb9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 seconds ago        Exited              dashboard-metrics-scraper   3                   8be51697a9b3c       dashboard-metrics-scraper-5f989dc9cf-hmsfl       kubernetes-dashboard
	0bd8bc1b44bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   de02613d72b8e       storage-provisioner                              kube-system
	9e58c1c388072       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   60e5572996f76       kubernetes-dashboard-8694d4445c-l7922            kubernetes-dashboard
	d96ba3e30860d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   4fa5300dcb2e8       coredns-5dd5756b68-tpdzb                         kube-system
	31953dda15e48       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   9dd3851948c78       busybox                                          default
	b71f8c879d3e9       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   1285b0e589883       kube-proxy-lqwmk                                 kube-system
	c98f1ec102e6f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   de02613d72b8e       storage-provisioner                              kube-system
	928f645b1c52e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   7849cc4a909d5       kindnet-kjgl5                                    kube-system
	d1ccde7192273       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   1d2e99435ed55       kube-apiserver-old-k8s-version-685516            kube-system
	6424c9687943b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   3d1f64dec04dc       etcd-old-k8s-version-685516                      kube-system
	87200c874b1d7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   983f03aa59ee9       kube-controller-manager-old-k8s-version-685516   kube-system
	937bd7fba17a3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   c6d19de0d7809       kube-scheduler-old-k8s-version-685516            kube-system
	
	
	==> coredns [d96ba3e30860d33e2f34e2ae4074b7ceb1ace88087c8a2c37af7d1d051febb85] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37953 - 56312 "HINFO IN 2454487474150258179.9203577140186764541. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043690888s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-685516
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-685516
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-685516
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_17_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:17:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-685516
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-685516
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0e7612a5-7b98-4dd8-91b7-663bc5a3b138
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-tpdzb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-old-k8s-version-685516                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-kjgl5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-old-k8s-version-685516             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-685516    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-lqwmk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-old-k8s-version-685516             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hmsfl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-l7922             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s                   node-controller  Node old-k8s-version-685516 event: Registered Node old-k8s-version-685516 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-685516 status is now: NodeReady
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-685516 event: Registered Node old-k8s-version-685516 in Controller
	
	
	==> dmesg <==
	[Nov29 09:47] overlayfs: idmapped layers are currently not supported
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6424c9687943bc7c9e2e9f3278b936d7cc5ac18aa5c44f37f9e424a325554a2d] <==
	{"level":"info","ts":"2025-11-29T10:18:48.614773Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-29T10:18:48.614781Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-29T10:18:48.61497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-29T10:18:48.61503Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-29T10:18:48.615802Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T10:18:48.615961Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T10:18:48.615982Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T10:18:48.616094Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:18:48.616101Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:18:48.617595Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:18:48.617721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:18:49.895222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.895349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.895403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.89544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.898373Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-685516 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T10:18:49.898467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:18:49.899576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-29T10:18:49.898488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:18:49.907083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T10:18:49.910176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T10:18:49.910257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:19:51 up  3:02,  0 user,  load average: 1.44, 2.33, 2.20
	Linux old-k8s-version-685516 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [928f645b1c52ecd3e1c677f3a9e0b0399d30209d88dd4d78fed8518ca1694aec] <==
	I1129 10:18:54.936338       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:18:54.936672       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:18:54.936807       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:18:54.936818       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:18:54.936830       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:18:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:18:55.131326       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:18:55.214199       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:18:55.214325       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:18:55.214526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:19:25.134565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:19:25.215222       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:19:25.215240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:19:25.216198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:19:26.714801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:19:26.714889       1 metrics.go:72] Registering metrics
	I1129 10:19:26.715713       1 controller.go:711] "Syncing nftables rules"
	I1129 10:19:35.130141       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:19:35.130219       1 main.go:301] handling current node
	I1129 10:19:45.137131       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:19:45.137219       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1ccde7192273d2c873fdd2640ce8601dedd0ee0717e723a71f05622c0cc2fd4] <==
	I1129 10:18:53.470149       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:18:53.477208       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1129 10:18:53.503560       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 10:18:53.503600       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 10:18:53.503608       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 10:18:53.503632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:18:53.509702       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 10:18:53.511283       1 aggregator.go:166] initial CRD sync complete...
	I1129 10:18:53.511319       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 10:18:53.511325       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:18:53.511333       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:18:53.512504       1 shared_informer.go:318] Caches are synced for configmaps
	I1129 10:18:53.512597       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1129 10:18:53.569676       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:18:54.108449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:18:55.124564       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 10:18:55.172466       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 10:18:55.200138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:18:55.223281       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:18:55.234623       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 10:18:55.295413       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.30.232"}
	I1129 10:18:55.332868       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.156.246"}
	I1129 10:19:05.853368       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:19:06.103516       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1129 10:19:06.202708       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [87200c874b1d75d3c50007e8a0e4cceae03a3a03b20279aba051adafd491eeea] <==
	I1129 10:19:06.112985       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1129 10:19:06.321008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="460.096326ms"
	I1129 10:19:06.321124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.363µs"
	I1129 10:19:06.322003       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-l7922"
	I1129 10:19:06.322029       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-hmsfl"
	I1129 10:19:06.343934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="231.682795ms"
	I1129 10:19:06.352579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="243.923838ms"
	I1129 10:19:06.377810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.760166ms"
	I1129 10:19:06.382582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.804141ms"
	I1129 10:19:06.382672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.234µs"
	I1129 10:19:06.382675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.792µs"
	I1129 10:19:06.388533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.120989ms"
	I1129 10:19:06.406297       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:19:06.416417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.63µs"
	I1129 10:19:06.438241       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:19:06.438272       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 10:19:11.569493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.324µs"
	I1129 10:19:12.575803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.978µs"
	I1129 10:19:13.575161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.898µs"
	I1129 10:19:16.595448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.829815ms"
	I1129 10:19:16.595753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.654µs"
	I1129 10:19:27.620668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.143µs"
	I1129 10:19:34.778355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.067933ms"
	I1129 10:19:34.778458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.871µs"
	I1129 10:19:36.649185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.628µs"
	
	
	==> kube-proxy [b71f8c879d3e9e455d4502f8326ee0cf5bf4bb869ea5e9562cf9852c0a2fe2af] <==
	I1129 10:18:55.105919       1 server_others.go:69] "Using iptables proxy"
	I1129 10:18:55.128200       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1129 10:18:55.190512       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:18:55.198880       1 server_others.go:152] "Using iptables Proxier"
	I1129 10:18:55.199065       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 10:18:55.199110       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 10:18:55.199176       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 10:18:55.200410       1 server.go:846] "Version info" version="v1.28.0"
	I1129 10:18:55.200488       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:18:55.204888       1 config.go:188] "Starting service config controller"
	I1129 10:18:55.204985       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 10:18:55.205028       1 config.go:97] "Starting endpoint slice config controller"
	I1129 10:18:55.205070       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 10:18:55.205633       1 config.go:315] "Starting node config controller"
	I1129 10:18:55.205705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 10:18:55.305951       1 shared_informer.go:318] Caches are synced for node config
	I1129 10:18:55.306069       1 shared_informer.go:318] Caches are synced for service config
	I1129 10:18:55.306233       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [937bd7fba17a336453e2bbe35345a1c8f3fd0dfc08b79ebfff1e6b375e4b15ca] <==
	I1129 10:18:50.441088       1 serving.go:348] Generated self-signed cert in-memory
	W1129 10:18:53.427348       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:18:53.427449       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:18:53.427483       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:18:53.427542       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:18:53.482722       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 10:18:53.482827       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:18:53.485041       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 10:18:53.485180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:18:53.485233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 10:18:53.485350       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1129 10:18:53.585622       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: I1129 10:19:06.420845     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46nv\" (UniqueName: \"kubernetes.io/projected/86ddcc94-563b-4476-a666-88ed2b609a9c-kube-api-access-q46nv\") pod \"dashboard-metrics-scraper-5f989dc9cf-hmsfl\" (UID: \"86ddcc94-563b-4476-a666-88ed2b609a9c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl"
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: W1129 10:19:06.660805     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f WatchSource:0}: Error finding container 8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f: Status 404 returned error can't find the container with id 8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: W1129 10:19:06.683068     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a WatchSource:0}: Error finding container 60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a: Status 404 returned error can't find the container with id 60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a
	Nov 29 10:19:11 old-k8s-version-685516 kubelet[787]: I1129 10:19:11.549449     787 scope.go:117] "RemoveContainer" containerID="246fd874e94146103e7b36c94d06b1f916edb06f082c9448d524e05dead0c91d"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: I1129 10:19:12.554243     787 scope.go:117] "RemoveContainer" containerID="246fd874e94146103e7b36c94d06b1f916edb06f082c9448d524e05dead0c91d"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: I1129 10:19:12.554530     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: E1129 10:19:12.554794     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:13 old-k8s-version-685516 kubelet[787]: I1129 10:19:13.559064     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:13 old-k8s-version-685516 kubelet[787]: E1129 10:19:13.559330     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:16 old-k8s-version-685516 kubelet[787]: I1129 10:19:16.631037     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:16 old-k8s-version-685516 kubelet[787]: E1129 10:19:16.631373     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:25 old-k8s-version-685516 kubelet[787]: I1129 10:19:25.588136     787 scope.go:117] "RemoveContainer" containerID="c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785"
	Nov 29 10:19:25 old-k8s-version-685516 kubelet[787]: I1129 10:19:25.617823     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l7922" podStartSLOduration=10.618616933 podCreationTimestamp="2025-11-29 10:19:06 +0000 UTC" firstStartedPulling="2025-11-29 10:19:06.688313786 +0000 UTC m=+19.460497291" lastFinishedPulling="2025-11-29 10:19:15.686456148 +0000 UTC m=+28.458639661" observedRunningTime="2025-11-29 10:19:16.580294871 +0000 UTC m=+29.352478384" watchObservedRunningTime="2025-11-29 10:19:25.616759303 +0000 UTC m=+38.388942807"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.413891     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.596586     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.596988     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: E1129 10:19:27.597325     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:36 old-k8s-version-685516 kubelet[787]: I1129 10:19:36.631025     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:36 old-k8s-version-685516 kubelet[787]: E1129 10:19:36.631336     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.412862     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.647606     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.702126     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9e58c1c388072eb6bdd4fe933f31cef732fec984bbe7e352218d505d47ad45b3] <==
	2025/11/29 10:19:15 Starting overwatch
	2025/11/29 10:19:15 Using namespace: kubernetes-dashboard
	2025/11/29 10:19:15 Using in-cluster config to connect to apiserver
	2025/11/29 10:19:15 Using secret token for csrf signing
	2025/11/29 10:19:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:19:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:19:15 Successful initial request to the apiserver, version: v1.28.0
	2025/11/29 10:19:15 Generating JWE encryption key
	2025/11/29 10:19:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:19:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:19:16 Initializing JWE encryption key from synchronized object
	2025/11/29 10:19:16 Creating in-cluster Sidecar client
	2025/11/29 10:19:16 Serving insecurely on HTTP port: 9090
	2025/11/29 10:19:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:19:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0bd8bc1b44bd5ffbfce43bb9a656fe55922bf48d04059d3e194d0dd90434d60c] <==
	I1129 10:19:25.638929       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:19:25.651804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:19:25.651862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 10:19:43.055877       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:19:43.056045       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0!
	I1129 10:19:43.056672       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72126377-2253-4702-b95c-9156b1e866c0", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0 became leader
	I1129 10:19:43.157009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0!
	
	
	==> storage-provisioner [c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785] <==
	I1129 10:18:54.989839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:19:24.996072       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-685516 -n old-k8s-version-685516: exit status 2 (384.717053ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-685516 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-685516
helpers_test.go:243: (dbg) docker inspect old-k8s-version-685516:

-- stdout --
	[
	    {
	        "Id": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	        "Created": "2025-11-29T10:17:19.016539964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:18:40.276987529Z",
	            "FinishedAt": "2025-11-29T10:18:39.436046621Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hostname",
	        "HostsPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/hosts",
	        "LogPath": "/var/lib/docker/containers/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda-json.log",
	        "Name": "/old-k8s-version-685516",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-685516:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-685516",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda",
	                "LowerDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5ccae098555169702e57794faa8bea449404e6d1cae7d804bb1bfdf45c6c9b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-685516",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-685516/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-685516",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-685516",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f5e9050c206a6ce63cb87d7123b687bfa8d3dff71da5de6930de4618eb88074",
	            "SandboxKey": "/var/run/docker/netns/8f5e9050c206",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-685516": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:70:ba:40:50:d5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef6362a531f73ce6ec3b16d1e169336b1eaf8a28a088fdb25af281248ccfdc3e",
	                    "EndpointID": "72a05371592ade20004cc7392b9af228e4824b681979451f39b5830d70ece687",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-685516",
	                        "e87cb8cc4025"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
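Note: only a few fields of the inspect output above matter for a failed pause: the container is running and not paused at the Docker level, and 22/tcp is published on 127.0.0.1:33426, which is the port the libmachine SSH provisioning in the "Last Start" trace below dials. A hedged one-liner to pull just those fields (the -f template mirrors the one cli_runner uses further down):

    docker inspect old-k8s-version-685516 \
      -f 'status={{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'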
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516: exit status 2 (366.892281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25: (1.33086977s)
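Note: the Run/Done pair above confirms the post-mortem log dump below was collected successfully in about 1.3s; the -n 25 flag appears to limit how many lines are pulled from each log source. To reproduce the same capture outside the harness and keep a copy (the shell redirection is added here for illustration and is not part of the harness invocation):

    out/minikube-linux-arm64 -p old-k8s-version-685516 logs -n 25 > /tmp/old-k8s-version-685516-postmortem.log 2>&1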
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-151203 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo containerd config dump                                                                                                                                                                                                  │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo crio config                                                                                                                                                                                                             │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ delete  │ -p cilium-151203                                                                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:15 UTC │
	│ start   │ -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p force-systemd-env-510051                                                                                                                                                                                                                   │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-930117   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:18:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:18:39.963949  492705 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:18:39.964077  492705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:18:39.964082  492705 out.go:374] Setting ErrFile to fd 2...
	I1129 10:18:39.964087  492705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:18:39.964335  492705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:18:39.964725  492705 out.go:368] Setting JSON to false
	I1129 10:18:39.965663  492705 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10869,"bootTime":1764400651,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:18:39.965741  492705 start.go:143] virtualization:  
	I1129 10:18:39.971914  492705 out.go:179] * [old-k8s-version-685516] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:18:39.975079  492705 notify.go:221] Checking for updates...
	I1129 10:18:39.975853  492705 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:18:39.978961  492705 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:18:39.982021  492705 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:39.985002  492705 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:18:39.987992  492705 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:18:39.990887  492705 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:18:39.994215  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:39.997674  492705 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1129 10:18:40.001047  492705 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:18:40.053196  492705 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:18:40.053356  492705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:18:40.120111  492705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:18:40.109863618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:18:40.120252  492705 docker.go:319] overlay module found
	I1129 10:18:40.123516  492705 out.go:179] * Using the docker driver based on existing profile
	I1129 10:18:40.126591  492705 start.go:309] selected driver: docker
	I1129 10:18:40.126623  492705 start.go:927] validating driver "docker" against &{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:40.126743  492705 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:18:40.127576  492705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:18:40.187163  492705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:18:40.176435258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:18:40.187597  492705 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:18:40.187651  492705 cni.go:84] Creating CNI manager for ""
	I1129 10:18:40.187719  492705 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:18:40.187766  492705 start.go:353] cluster config:
	{Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:40.190969  492705 out.go:179] * Starting "old-k8s-version-685516" primary control-plane node in "old-k8s-version-685516" cluster
	I1129 10:18:40.194205  492705 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:18:40.197328  492705 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:18:40.201625  492705 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:18:40.201680  492705 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1129 10:18:40.201691  492705 cache.go:65] Caching tarball of preloaded images
	I1129 10:18:40.201725  492705 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:18:40.201782  492705 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:18:40.201794  492705 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1129 10:18:40.201905  492705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:18:40.223559  492705 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:18:40.223585  492705 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:18:40.223605  492705 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:18:40.223641  492705 start.go:360] acquireMachinesLock for old-k8s-version-685516: {Name:mk7482d2fe027ea0120ebabcf8485e86c0be82ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:18:40.223714  492705 start.go:364] duration metric: took 46.844µs to acquireMachinesLock for "old-k8s-version-685516"
	I1129 10:18:40.223738  492705 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:18:40.223752  492705 fix.go:54] fixHost starting: 
	I1129 10:18:40.224005  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:40.241600  492705 fix.go:112] recreateIfNeeded on old-k8s-version-685516: state=Stopped err=<nil>
	W1129 10:18:40.241629  492705 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:18:40.244967  492705 out.go:252] * Restarting existing docker container for "old-k8s-version-685516" ...
	I1129 10:18:40.245055  492705 cli_runner.go:164] Run: docker start old-k8s-version-685516
	I1129 10:18:40.491816  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:40.513696  492705 kic.go:430] container "old-k8s-version-685516" state is running.
	I1129 10:18:40.514188  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:40.542152  492705 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/config.json ...
	I1129 10:18:40.542417  492705 machine.go:94] provisionDockerMachine start ...
	I1129 10:18:40.542490  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:40.560162  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:40.560492  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:40.560508  492705 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:18:40.561220  492705 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:18:43.713639  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:18:43.713665  492705 ubuntu.go:182] provisioning hostname "old-k8s-version-685516"
	I1129 10:18:43.713750  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:43.730716  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:43.731035  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:43.731053  492705 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-685516 && echo "old-k8s-version-685516" | sudo tee /etc/hostname
	I1129 10:18:43.891058  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-685516
	
	I1129 10:18:43.891151  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:43.910137  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:43.910477  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:43.910500  492705 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-685516' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-685516/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-685516' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:18:44.062576  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:18:44.062600  492705 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:18:44.062625  492705 ubuntu.go:190] setting up certificates
	I1129 10:18:44.062636  492705 provision.go:84] configureAuth start
	I1129 10:18:44.062696  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:44.080652  492705 provision.go:143] copyHostCerts
	I1129 10:18:44.080733  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:18:44.080748  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:18:44.080828  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:18:44.080957  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:18:44.080970  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:18:44.081000  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:18:44.081062  492705 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:18:44.081072  492705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:18:44.081098  492705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:18:44.081172  492705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-685516 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-685516]
	I1129 10:18:44.505871  492705 provision.go:177] copyRemoteCerts
	I1129 10:18:44.505950  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:18:44.506003  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:44.529450  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:44.633775  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:18:44.651923  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 10:18:44.669717  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:18:44.687720  492705 provision.go:87] duration metric: took 625.069369ms to configureAuth
	I1129 10:18:44.687747  492705 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:18:44.687945  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:44.688048  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:44.705273  492705 main.go:143] libmachine: Using SSH client type: native
	I1129 10:18:44.705579  492705 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1129 10:18:44.705593  492705 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:18:45.139681  492705 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:18:45.139713  492705 machine.go:97] duration metric: took 4.597282599s to provisionDockerMachine
	I1129 10:18:45.139735  492705 start.go:293] postStartSetup for "old-k8s-version-685516" (driver="docker")
	I1129 10:18:45.139818  492705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:18:45.139923  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:18:45.139997  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.161865  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.279602  492705 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:18:45.287071  492705 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:18:45.287103  492705 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:18:45.287117  492705 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:18:45.287175  492705 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:18:45.287259  492705 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:18:45.287369  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:18:45.295596  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:18:45.316836  492705 start.go:296] duration metric: took 177.015422ms for postStartSetup
	I1129 10:18:45.316959  492705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:18:45.317043  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.339386  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.443136  492705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:18:45.448083  492705 fix.go:56] duration metric: took 5.224324748s for fixHost
	I1129 10:18:45.448120  492705 start.go:83] releasing machines lock for "old-k8s-version-685516", held for 5.22438275s
	I1129 10:18:45.448208  492705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-685516
	I1129 10:18:45.465009  492705 ssh_runner.go:195] Run: cat /version.json
	I1129 10:18:45.465053  492705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:18:45.465068  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.465109  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:45.487480  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.499054  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:45.602814  492705 ssh_runner.go:195] Run: systemctl --version
	I1129 10:18:45.693664  492705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:18:45.745473  492705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:18:45.750025  492705 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:18:45.750240  492705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:18:45.759219  492705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:18:45.759247  492705 start.go:496] detecting cgroup driver to use...
	I1129 10:18:45.759279  492705 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:18:45.759339  492705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:18:45.774776  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:18:45.788739  492705 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:18:45.788853  492705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:18:45.804632  492705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:18:45.817878  492705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:18:45.926990  492705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:18:46.058426  492705 docker.go:234] disabling docker service ...
	I1129 10:18:46.058537  492705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:18:46.074487  492705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:18:46.087816  492705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:18:46.207223  492705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:18:46.327680  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:18:46.340445  492705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:18:46.357631  492705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1129 10:18:46.357722  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.367700  492705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:18:46.367791  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.377033  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.386979  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.396657  492705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:18:46.405636  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.415699  492705 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.424857  492705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:18:46.434426  492705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:18:46.441952  492705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:18:46.449355  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:46.566247  492705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:18:46.752494  492705 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:18:46.752625  492705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:18:46.759476  492705 start.go:564] Will wait 60s for crictl version
	I1129 10:18:46.759560  492705 ssh_runner.go:195] Run: which crictl
	I1129 10:18:46.764031  492705 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:18:46.791956  492705 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:18:46.792076  492705 ssh_runner.go:195] Run: crio --version
	I1129 10:18:46.821367  492705 ssh_runner.go:195] Run: crio --version
	I1129 10:18:46.854361  492705 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1129 10:18:46.857106  492705 cli_runner.go:164] Run: docker network inspect old-k8s-version-685516 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:18:46.873109  492705 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:18:46.877143  492705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:18:46.886908  492705 kubeadm.go:884] updating cluster {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:18:46.887040  492705 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 10:18:46.887091  492705 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:18:46.923599  492705 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:18:46.923625  492705 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:18:46.923679  492705 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:18:46.949476  492705 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:18:46.949500  492705 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:18:46.949508  492705 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1129 10:18:46.949606  492705 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-685516 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:18:46.949685  492705 ssh_runner.go:195] Run: crio config
	I1129 10:18:47.018359  492705 cni.go:84] Creating CNI manager for ""
	I1129 10:18:47.018386  492705 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:18:47.026156  492705 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:18:47.026261  492705 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-685516 NodeName:old-k8s-version-685516 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:18:47.026663  492705 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-685516"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:18:47.026767  492705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 10:18:47.035016  492705 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:18:47.035098  492705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:18:47.043081  492705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1129 10:18:47.056461  492705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:18:47.069030  492705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
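The kubeadm/kubelet/kube-proxy config printed a few lines above is what this scp just wrote to the node as /var/tmp/minikube/kubeadm.yaml.new. One way to inspect the live copy during a run (a sketch, assuming the same binary path and profile name as this test) is:

    out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new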
	I1129 10:18:47.081659  492705 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:18:47.085219  492705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:18:47.095080  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:47.210325  492705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:18:47.226144  492705 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516 for IP: 192.168.85.2
	I1129 10:18:47.226179  492705 certs.go:195] generating shared ca certs ...
	I1129 10:18:47.226217  492705 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:47.226418  492705 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:18:47.226512  492705 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:18:47.226529  492705 certs.go:257] generating profile certs ...
	I1129 10:18:47.226655  492705 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.key
	I1129 10:18:47.226781  492705 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key.a7d871e6
	I1129 10:18:47.226866  492705 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key
	I1129 10:18:47.227039  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:18:47.227102  492705 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:18:47.227118  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:18:47.227150  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:18:47.227208  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:18:47.227257  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:18:47.227354  492705 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:18:47.228053  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:18:47.252414  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:18:47.274591  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:18:47.296101  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:18:47.318524  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 10:18:47.339732  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:18:47.357353  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:18:47.386160  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:18:47.417911  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:18:47.439626  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:18:47.459874  492705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:18:47.478552  492705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:18:47.491768  492705 ssh_runner.go:195] Run: openssl version
	I1129 10:18:47.497941  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:18:47.507996  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.511791  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.511855  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:18:47.555293  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:18:47.564744  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:18:47.572965  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.576897  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.576964  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:18:47.618086  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:18:47.625872  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:18:47.634377  492705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.638355  492705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.638438  492705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:18:47.679391  492705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
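The three blocks above repeat one pattern for trusting an extra CA on the node: link the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and link <hash>.0 back to the PEM so OpenSSL's hashed lookup finds it. A sketch of that pattern with a placeholder file name:

    # example.pem is a placeholder; the steps mirror the commands logged above
    pem=/usr/share/ca-certificates/example.pem
    sudo ln -fs "$pem" /etc/ssl/certs/example.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"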
	I1129 10:18:47.687071  492705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:18:47.690690  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:18:47.731540  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:18:47.771986  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:18:47.813518  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:18:47.857273  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:18:47.914363  492705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
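Each check above uses openssl's -checkend, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). The same sweep could be written as a loop over the certificate names checked in this run:

    # run on the node; these are the certificates checked above
    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "${crt}.crt expires within 24h"
    done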
	I1129 10:18:47.958973  492705 kubeadm.go:401] StartCluster: {Name:old-k8s-version-685516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-685516 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:18:47.959121  492705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:18:47.959232  492705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:18:48.001641  492705 cri.go:89] found id: ""
	I1129 10:18:48.001789  492705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:18:48.030377  492705 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:18:48.030402  492705 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:18:48.030495  492705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:18:48.040905  492705 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:18:48.041749  492705 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-685516" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:48.042123  492705 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-685516" cluster setting kubeconfig missing "old-k8s-version-685516" context setting]
	I1129 10:18:48.042688  492705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.044731  492705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:18:48.057827  492705 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 10:18:48.057861  492705 kubeadm.go:602] duration metric: took 27.44143ms to restartPrimaryControlPlane
	I1129 10:18:48.057871  492705 kubeadm.go:403] duration metric: took 98.912523ms to StartCluster
	I1129 10:18:48.057887  492705 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.057952  492705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:18:48.058958  492705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:18:48.059211  492705 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:18:48.059486  492705 config.go:182] Loaded profile config "old-k8s-version-685516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 10:18:48.059527  492705 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:18:48.059591  492705 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-685516"
	I1129 10:18:48.059606  492705 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-685516"
	W1129 10:18:48.059613  492705 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:18:48.059643  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.060078  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.060429  492705 addons.go:70] Setting dashboard=true in profile "old-k8s-version-685516"
	I1129 10:18:48.060456  492705 addons.go:239] Setting addon dashboard=true in "old-k8s-version-685516"
	W1129 10:18:48.060463  492705 addons.go:248] addon dashboard should already be in state true
	I1129 10:18:48.060486  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.060894  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.061275  492705 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-685516"
	I1129 10:18:48.061298  492705 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-685516"
	I1129 10:18:48.061579  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.063091  492705 out.go:179] * Verifying Kubernetes components...
	I1129 10:18:48.066335  492705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:18:48.133721  492705 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-685516"
	W1129 10:18:48.133745  492705 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:18:48.133769  492705 host.go:66] Checking if "old-k8s-version-685516" exists ...
	I1129 10:18:48.134253  492705 cli_runner.go:164] Run: docker container inspect old-k8s-version-685516 --format={{.State.Status}}
	I1129 10:18:48.137318  492705 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:18:48.139294  492705 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:18:48.142294  492705 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:18:48.142319  492705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:18:48.142389  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.145294  492705 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:18:48.148222  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:18:48.148251  492705 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:18:48.148318  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.179325  492705 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:18:48.179354  492705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:18:48.179418  492705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-685516
	I1129 10:18:48.186327  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.228271  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.238678  492705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/old-k8s-version-685516/id_rsa Username:docker}
	I1129 10:18:48.418104  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:18:48.468090  492705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:18:48.474447  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:18:48.474472  492705 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:18:48.516254  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:18:48.516275  492705 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:18:48.536841  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:18:48.536866  492705 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:18:48.567502  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:18:48.623049  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:18:48.623075  492705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:18:48.709932  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:18:48.709958  492705 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:18:48.760035  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:18:48.760059  492705 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:18:48.808158  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:18:48.808182  492705 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:18:48.827438  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:18:48.827464  492705 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:18:48.849572  492705 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:18:48.849596  492705 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:18:48.869347  492705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:18:54.339082  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.920939133s)
	I1129 10:18:54.339146  492705 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.871031412s)
	I1129 10:18:54.339175  492705 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:18:54.339494  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.77196294s)
	I1129 10:18:54.381324  492705 node_ready.go:49] node "old-k8s-version-685516" is "Ready"
	I1129 10:18:54.381358  492705 node_ready.go:38] duration metric: took 42.152408ms for node "old-k8s-version-685516" to be "Ready" ...
	I1129 10:18:54.381372  492705 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:18:54.381436  492705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:18:55.340522  492705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.471133758s)
	I1129 10:18:55.340622  492705 api_server.go:72] duration metric: took 7.281380961s to wait for apiserver process to appear ...
	I1129 10:18:55.340652  492705 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:18:55.340700  492705 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:18:55.343846  492705 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-685516 addons enable metrics-server
	
	I1129 10:18:55.346941  492705 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1129 10:18:55.350068  492705 addons.go:530] duration metric: took 7.290534502s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1129 10:18:55.352127  492705 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:18:55.358859  492705 api_server.go:141] control plane version: v1.28.0
	I1129 10:18:55.358931  492705 api_server.go:131] duration metric: took 18.259786ms to wait for apiserver health ...
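The healthz probe logged above can be reproduced by hand; a sketch against the endpoint used in this run (-k because the cluster CA is not in the host trust store, and /healthz is readable without credentials on a default apiserver):

    curl -sk https://192.168.85.2:8443/healthz
    # a healthy apiserver answers: ok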
	I1129 10:18:55.358959  492705 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:18:55.367107  492705 system_pods.go:59] 8 kube-system pods found
	I1129 10:18:55.367194  492705 system_pods.go:61] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:55.367233  492705 system_pods.go:61] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:18:55.367260  492705 system_pods.go:61] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:18:55.367286  492705 system_pods.go:61] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:18:55.367320  492705 system_pods.go:61] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:18:55.367341  492705 system_pods.go:61] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:18:55.367370  492705 system_pods.go:61] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:18:55.367402  492705 system_pods.go:61] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:55.367423  492705 system_pods.go:74] duration metric: took 8.444257ms to wait for pod list to return data ...
	I1129 10:18:55.367447  492705 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:18:55.371509  492705 default_sa.go:45] found service account: "default"
	I1129 10:18:55.371574  492705 default_sa.go:55] duration metric: took 4.106763ms for default service account to be created ...
	I1129 10:18:55.371600  492705 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:18:55.377598  492705 system_pods.go:86] 8 kube-system pods found
	I1129 10:18:55.377675  492705 system_pods.go:89] "coredns-5dd5756b68-tpdzb" [29876dde-8614-4eb6-8b96-b3874f249d0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:18:55.377703  492705 system_pods.go:89] "etcd-old-k8s-version-685516" [28d58a06-d7d0-414a-8d48-4e6eb1c7839c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:18:55.377739  492705 system_pods.go:89] "kindnet-kjgl5" [1845614a-a695-4e01-9942-51df13c347cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:18:55.377761  492705 system_pods.go:89] "kube-apiserver-old-k8s-version-685516" [0d4a0535-6b9a-44ef-a75c-d7029708e2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:18:55.377794  492705 system_pods.go:89] "kube-controller-manager-old-k8s-version-685516" [8f1a3558-3c51-4baa-8710-fcb11b781b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:18:55.377817  492705 system_pods.go:89] "kube-proxy-lqwmk" [40a4871d-ed30-4509-b7be-30f31f9bf40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:18:55.377847  492705 system_pods.go:89] "kube-scheduler-old-k8s-version-685516" [30a62d45-c398-43e7-ac97-d427df9a78eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:18:55.377884  492705 system_pods.go:89] "storage-provisioner" [13c1253b-cf78-454d-a5a4-397e98f7ed48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:18:55.377917  492705 system_pods.go:126] duration metric: took 6.298271ms to wait for k8s-apps to be running ...
	I1129 10:18:55.377941  492705 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:18:55.378011  492705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:18:55.397842  492705 system_svc.go:56] duration metric: took 19.890952ms WaitForService to wait for kubelet
	I1129 10:18:55.397911  492705 kubeadm.go:587] duration metric: took 7.338670757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:18:55.397947  492705 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:18:55.401846  492705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:18:55.401896  492705 node_conditions.go:123] node cpu capacity is 2
	I1129 10:18:55.401911  492705 node_conditions.go:105] duration metric: took 3.942273ms to run NodePressure ...
	I1129 10:18:55.401925  492705 start.go:242] waiting for startup goroutines ...
	I1129 10:18:55.401933  492705 start.go:247] waiting for cluster config update ...
	I1129 10:18:55.401956  492705 start.go:256] writing updated cluster config ...
	I1129 10:18:55.402300  492705 ssh_runner.go:195] Run: rm -f paused
	I1129 10:18:55.406312  492705 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:18:55.415151  492705 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:18:57.421365  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:18:59.421695  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:01.921155  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:03.921782  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:06.423720  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:08.921331  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:10.921664  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:12.921911  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:15.422231  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:17.922854  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:20.421178  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:22.421873  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:24.920955  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:27.426035  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:29.921336  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:32.420614  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	W1129 10:19:34.421308  492705 pod_ready.go:104] pod "coredns-5dd5756b68-tpdzb" is not "Ready", error: <nil>
	I1129 10:19:34.920905  492705 pod_ready.go:94] pod "coredns-5dd5756b68-tpdzb" is "Ready"
	I1129 10:19:34.920939  492705 pod_ready.go:86] duration metric: took 39.505756272s for pod "coredns-5dd5756b68-tpdzb" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.924531  492705 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.930115  492705 pod_ready.go:94] pod "etcd-old-k8s-version-685516" is "Ready"
	I1129 10:19:34.930143  492705 pod_ready.go:86] duration metric: took 5.580955ms for pod "etcd-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.933340  492705 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.938542  492705 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-685516" is "Ready"
	I1129 10:19:34.938570  492705 pod_ready.go:86] duration metric: took 5.203908ms for pod "kube-apiserver-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:34.941744  492705 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.118927  492705 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-685516" is "Ready"
	I1129 10:19:35.119026  492705 pod_ready.go:86] duration metric: took 177.251196ms for pod "kube-controller-manager-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.322010  492705 pod_ready.go:83] waiting for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.718879  492705 pod_ready.go:94] pod "kube-proxy-lqwmk" is "Ready"
	I1129 10:19:35.718910  492705 pod_ready.go:86] duration metric: took 396.873466ms for pod "kube-proxy-lqwmk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:35.919844  492705 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:36.319360  492705 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-685516" is "Ready"
	I1129 10:19:36.319441  492705 pod_ready.go:86] duration metric: took 399.569104ms for pod "kube-scheduler-old-k8s-version-685516" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:19:36.319472  492705 pod_ready.go:40] duration metric: took 40.913125978s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:19:36.378561  492705 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1129 10:19:36.381636  492705 out.go:203] 
	W1129 10:19:36.384558  492705 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 10:19:36.387466  492705 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 10:19:36.390360  492705 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-685516" cluster and "default" namespace by default
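The skew warning a few lines above (kubectl 1.33.2 against Kubernetes 1.28.0) is why the hint suggests the bundled kubectl; spelled out with the binary and profile used in this test, that would be:

    out/minikube-linux-arm64 -p old-k8s-version-685516 kubectl -- get pods -A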
	
	
	==> CRI-O <==
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.136665062Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.14039322Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.140432146Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.140459412Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.144177313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.144216632Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.14423895Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147682748Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147716422Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.147739421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.152036579Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:19:35 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:35.152075086Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.413516247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb0f55b1-46f5-4000-90c8-a104809e7943 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.41470369Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=af5b4e43-1ee3-4f0c-8faf-6fb530a3be64 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.418222336Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=01061956-6d3c-4988-b22d-3bade4f8a154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.418328774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.42648735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.42701836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.444931723Z" level=info msg="Created container f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=01061956-6d3c-4988-b22d-3bade4f8a154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.464887301Z" level=info msg="Starting container: f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738" id=4eeccfb5-fb56-4629-9d8c-bec55c328448 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.472363883Z" level=info msg="Started container" PID=1743 containerID=f82e905beeb9de6e6be427576633b765c2920181868b73fda048cc7f8f493738 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper id=4eeccfb5-fb56-4629-9d8c-bec55c328448 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f
	Nov 29 10:19:48 old-k8s-version-685516 conmon[1741]: conmon f82e905beeb9de6e6be4 <ninfo>: container 1743 exited with status 1
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.649483234Z" level=info msg="Removing container: afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.658611069Z" level=info msg="Error loading conmon cgroup of container afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e: cgroup deleted" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:19:48 old-k8s-version-685516 crio[655]: time="2025-11-29T10:19:48.662828651Z" level=info msg="Removed container afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl/dashboard-metrics-scraper" id=d4ff9c52-8cb0-4bbc-8d78-828890003642 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f82e905beeb9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   8be51697a9b3c       dashboard-metrics-scraper-5f989dc9cf-hmsfl       kubernetes-dashboard
	0bd8bc1b44bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   de02613d72b8e       storage-provisioner                              kube-system
	9e58c1c388072       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   60e5572996f76       kubernetes-dashboard-8694d4445c-l7922            kubernetes-dashboard
	d96ba3e30860d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   4fa5300dcb2e8       coredns-5dd5756b68-tpdzb                         kube-system
	31953dda15e48       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   9dd3851948c78       busybox                                          default
	b71f8c879d3e9       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   1285b0e589883       kube-proxy-lqwmk                                 kube-system
	c98f1ec102e6f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   de02613d72b8e       storage-provisioner                              kube-system
	928f645b1c52e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   7849cc4a909d5       kindnet-kjgl5                                    kube-system
	d1ccde7192273       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   1d2e99435ed55       kube-apiserver-old-k8s-version-685516            kube-system
	6424c9687943b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   3d1f64dec04dc       etcd-old-k8s-version-685516                      kube-system
	87200c874b1d7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   983f03aa59ee9       kube-controller-manager-old-k8s-version-685516   kube-system
	937bd7fba17a3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   c6d19de0d7809       kube-scheduler-old-k8s-version-685516            kube-system
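The table above is the CRI's view of the node. An equivalent listing, including exited containers, can be pulled directly from the host for the same profile (a sketch, not part of the captured test output):

    out/minikube-linux-arm64 -p old-k8s-version-685516 ssh -- sudo crictl ps -a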
	
	
	==> coredns [d96ba3e30860d33e2f34e2ae4074b7ceb1ace88087c8a2c37af7d1d051febb85] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37953 - 56312 "HINFO IN 2454487474150258179.9203577140186764541. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043690888s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-685516
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-685516
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-685516
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_17_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:17:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-685516
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:17:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:19:24 +0000   Sat, 29 Nov 2025 10:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-685516
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0e7612a5-7b98-4dd8-91b7-663bc5a3b138
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-5dd5756b68-tpdzb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 etcd-old-k8s-version-685516                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-kjgl5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-old-k8s-version-685516             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-old-k8s-version-685516    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-lqwmk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-old-k8s-version-685516             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hmsfl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-l7922             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x8 over 2m18s)  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m10s                  kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m10s                  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m10s                  kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           119s                   node-controller  Node old-k8s-version-685516 event: Registered Node old-k8s-version-685516 in Controller
	  Normal  NodeReady                102s                   kubelet          Node old-k8s-version-685516 status is now: NodeReady
	  Normal  Starting                 66s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node old-k8s-version-685516 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node old-k8s-version-685516 event: Registered Node old-k8s-version-685516 in Controller
	
	
	==> dmesg <==
	[Nov29 09:47] overlayfs: idmapped layers are currently not supported
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6424c9687943bc7c9e2e9f3278b936d7cc5ac18aa5c44f37f9e424a325554a2d] <==
	{"level":"info","ts":"2025-11-29T10:18:48.614773Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-29T10:18:48.614781Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-29T10:18:48.61497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-29T10:18:48.61503Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-29T10:18:48.615802Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T10:18:48.615961Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T10:18:48.615982Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T10:18:48.616094Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:18:48.616101Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-29T10:18:48.617595Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:18:48.617721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T10:18:49.895222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.895349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.895403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-29T10:18:49.89544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.895557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-29T10:18:49.898373Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-685516 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T10:18:49.898467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:18:49.899576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-29T10:18:49.898488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T10:18:49.907083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T10:18:49.910176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T10:18:49.910257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:19:53 up  3:02,  0 user,  load average: 1.44, 2.33, 2.20
	Linux old-k8s-version-685516 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [928f645b1c52ecd3e1c677f3a9e0b0399d30209d88dd4d78fed8518ca1694aec] <==
	I1129 10:18:54.936338       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:18:54.936672       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:18:54.936807       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:18:54.936818       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:18:54.936830       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:18:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:18:55.131326       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:18:55.214199       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:18:55.214325       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:18:55.214526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:19:25.134565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:19:25.215222       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:19:25.215240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:19:25.216198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:19:26.714801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:19:26.714889       1 metrics.go:72] Registering metrics
	I1129 10:19:26.715713       1 controller.go:711] "Syncing nftables rules"
	I1129 10:19:35.130141       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:19:35.130219       1 main.go:301] handling current node
	I1129 10:19:45.137131       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:19:45.137219       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1ccde7192273d2c873fdd2640ce8601dedd0ee0717e723a71f05622c0cc2fd4] <==
	I1129 10:18:53.470149       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:18:53.477208       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1129 10:18:53.503560       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 10:18:53.503600       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 10:18:53.503608       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 10:18:53.503632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:18:53.509702       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 10:18:53.511283       1 aggregator.go:166] initial CRD sync complete...
	I1129 10:18:53.511319       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 10:18:53.511325       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:18:53.511333       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:18:53.512504       1 shared_informer.go:318] Caches are synced for configmaps
	I1129 10:18:53.512597       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1129 10:18:53.569676       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:18:54.108449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:18:55.124564       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 10:18:55.172466       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 10:18:55.200138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:18:55.223281       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:18:55.234623       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 10:18:55.295413       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.30.232"}
	I1129 10:18:55.332868       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.156.246"}
	I1129 10:19:05.853368       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:19:06.103516       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1129 10:19:06.202708       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [87200c874b1d75d3c50007e8a0e4cceae03a3a03b20279aba051adafd491eeea] <==
	I1129 10:19:06.112985       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1129 10:19:06.321008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="460.096326ms"
	I1129 10:19:06.321124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.363µs"
	I1129 10:19:06.322003       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-l7922"
	I1129 10:19:06.322029       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-hmsfl"
	I1129 10:19:06.343934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="231.682795ms"
	I1129 10:19:06.352579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="243.923838ms"
	I1129 10:19:06.377810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="33.760166ms"
	I1129 10:19:06.382582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.804141ms"
	I1129 10:19:06.382672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.234µs"
	I1129 10:19:06.382675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.792µs"
	I1129 10:19:06.388533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.120989ms"
	I1129 10:19:06.406297       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:19:06.416417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.63µs"
	I1129 10:19:06.438241       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 10:19:06.438272       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 10:19:11.569493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.324µs"
	I1129 10:19:12.575803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="104.978µs"
	I1129 10:19:13.575161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.898µs"
	I1129 10:19:16.595448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.829815ms"
	I1129 10:19:16.595753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.654µs"
	I1129 10:19:27.620668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.143µs"
	I1129 10:19:34.778355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.067933ms"
	I1129 10:19:34.778458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.871µs"
	I1129 10:19:36.649185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.628µs"
	
	
	==> kube-proxy [b71f8c879d3e9e455d4502f8326ee0cf5bf4bb869ea5e9562cf9852c0a2fe2af] <==
	I1129 10:18:55.105919       1 server_others.go:69] "Using iptables proxy"
	I1129 10:18:55.128200       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1129 10:18:55.190512       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:18:55.198880       1 server_others.go:152] "Using iptables Proxier"
	I1129 10:18:55.199065       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 10:18:55.199110       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 10:18:55.199176       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 10:18:55.200410       1 server.go:846] "Version info" version="v1.28.0"
	I1129 10:18:55.200488       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:18:55.204888       1 config.go:188] "Starting service config controller"
	I1129 10:18:55.204985       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 10:18:55.205028       1 config.go:97] "Starting endpoint slice config controller"
	I1129 10:18:55.205070       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 10:18:55.205633       1 config.go:315] "Starting node config controller"
	I1129 10:18:55.205705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 10:18:55.305951       1 shared_informer.go:318] Caches are synced for node config
	I1129 10:18:55.306069       1 shared_informer.go:318] Caches are synced for service config
	I1129 10:18:55.306233       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [937bd7fba17a336453e2bbe35345a1c8f3fd0dfc08b79ebfff1e6b375e4b15ca] <==
	I1129 10:18:50.441088       1 serving.go:348] Generated self-signed cert in-memory
	W1129 10:18:53.427348       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:18:53.427449       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:18:53.427483       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:18:53.427542       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:18:53.482722       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 10:18:53.482827       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:18:53.485041       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 10:18:53.485180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:18:53.485233       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 10:18:53.485350       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1129 10:18:53.585622       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: I1129 10:19:06.420845     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46nv\" (UniqueName: \"kubernetes.io/projected/86ddcc94-563b-4476-a666-88ed2b609a9c-kube-api-access-q46nv\") pod \"dashboard-metrics-scraper-5f989dc9cf-hmsfl\" (UID: \"86ddcc94-563b-4476-a666-88ed2b609a9c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl"
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: W1129 10:19:06.660805     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f WatchSource:0}: Error finding container 8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f: Status 404 returned error can't find the container with id 8be51697a9b3cb3b39b8aaf4949087e5e39ca35e14c6d3b041ae178756f6484f
	Nov 29 10:19:06 old-k8s-version-685516 kubelet[787]: W1129 10:19:06.683068     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e87cb8cc4025f3b44ddc5930a4b3c40f1c42ddc43da79be7013ef6fc8afa3eda/crio-60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a WatchSource:0}: Error finding container 60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a: Status 404 returned error can't find the container with id 60e5572996f7668becb9816a22fdd574742ff45a60d6c63b962e445cd994ba1a
	Nov 29 10:19:11 old-k8s-version-685516 kubelet[787]: I1129 10:19:11.549449     787 scope.go:117] "RemoveContainer" containerID="246fd874e94146103e7b36c94d06b1f916edb06f082c9448d524e05dead0c91d"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: I1129 10:19:12.554243     787 scope.go:117] "RemoveContainer" containerID="246fd874e94146103e7b36c94d06b1f916edb06f082c9448d524e05dead0c91d"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: I1129 10:19:12.554530     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:12 old-k8s-version-685516 kubelet[787]: E1129 10:19:12.554794     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:13 old-k8s-version-685516 kubelet[787]: I1129 10:19:13.559064     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:13 old-k8s-version-685516 kubelet[787]: E1129 10:19:13.559330     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:16 old-k8s-version-685516 kubelet[787]: I1129 10:19:16.631037     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:16 old-k8s-version-685516 kubelet[787]: E1129 10:19:16.631373     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:25 old-k8s-version-685516 kubelet[787]: I1129 10:19:25.588136     787 scope.go:117] "RemoveContainer" containerID="c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785"
	Nov 29 10:19:25 old-k8s-version-685516 kubelet[787]: I1129 10:19:25.617823     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l7922" podStartSLOduration=10.618616933 podCreationTimestamp="2025-11-29 10:19:06 +0000 UTC" firstStartedPulling="2025-11-29 10:19:06.688313786 +0000 UTC m=+19.460497291" lastFinishedPulling="2025-11-29 10:19:15.686456148 +0000 UTC m=+28.458639661" observedRunningTime="2025-11-29 10:19:16.580294871 +0000 UTC m=+29.352478384" watchObservedRunningTime="2025-11-29 10:19:25.616759303 +0000 UTC m=+38.388942807"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.413891     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.596586     787 scope.go:117] "RemoveContainer" containerID="1cee62c190a33ad78596c2742a026d946696dd504995274575e014d3510c0b29"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: I1129 10:19:27.596988     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:27 old-k8s-version-685516 kubelet[787]: E1129 10:19:27.597325     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:36 old-k8s-version-685516 kubelet[787]: I1129 10:19:36.631025     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:36 old-k8s-version-685516 kubelet[787]: E1129 10:19:36.631336     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hmsfl_kubernetes-dashboard(86ddcc94-563b-4476-a666-88ed2b609a9c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hmsfl" podUID="86ddcc94-563b-4476-a666-88ed2b609a9c"
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.412862     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.647606     787 scope.go:117] "RemoveContainer" containerID="afe6e08f27445d37ddb502b952f6041d19eb467b1ad0139172bb7629cf68014e"
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:19:48 old-k8s-version-685516 kubelet[787]: I1129 10:19:48.702126     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:19:48 old-k8s-version-685516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9e58c1c388072eb6bdd4fe933f31cef732fec984bbe7e352218d505d47ad45b3] <==
	2025/11/29 10:19:15 Starting overwatch
	2025/11/29 10:19:15 Using namespace: kubernetes-dashboard
	2025/11/29 10:19:15 Using in-cluster config to connect to apiserver
	2025/11/29 10:19:15 Using secret token for csrf signing
	2025/11/29 10:19:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:19:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:19:15 Successful initial request to the apiserver, version: v1.28.0
	2025/11/29 10:19:15 Generating JWE encryption key
	2025/11/29 10:19:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:19:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:19:16 Initializing JWE encryption key from synchronized object
	2025/11/29 10:19:16 Creating in-cluster Sidecar client
	2025/11/29 10:19:16 Serving insecurely on HTTP port: 9090
	2025/11/29 10:19:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:19:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0bd8bc1b44bd5ffbfce43bb9a656fe55922bf48d04059d3e194d0dd90434d60c] <==
	I1129 10:19:25.638929       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:19:25.651804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:19:25.651862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 10:19:43.055877       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:19:43.056045       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0!
	I1129 10:19:43.056672       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72126377-2253-4702-b95c-9156b1e866c0", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0 became leader
	I1129 10:19:43.157009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-685516_9812c4f6-45b8-450f-a1a2-77a2edef42e0!
	
	
	==> storage-provisioner [c98f1ec102e6f68de9416d8a551b299409994ef27c0a89617c909e25d050a785] <==
	I1129 10:18:54.989839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:19:24.996072       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-685516 -n old-k8s-version-685516
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-685516 -n old-k8s-version-685516: exit status 2 (375.069058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-685516 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.216239ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:21:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-708011 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-708011 describe deploy/metrics-server -n kube-system: exit status 1 (84.56088ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-708011 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-708011
helpers_test.go:243: (dbg) docker inspect embed-certs-708011:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	        "Created": "2025-11-29T10:20:04.082616861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:20:04.165314859Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hosts",
	        "LogPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a-json.log",
	        "Name": "/embed-certs-708011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-708011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-708011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	                "LowerDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-708011",
	                "Source": "/var/lib/docker/volumes/embed-certs-708011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-708011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-708011",
	                "name.minikube.sigs.k8s.io": "embed-certs-708011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "123809adf1fc3afacaf732e9425eb262eabe1a7ba267cfdbc1585470667ce565",
	            "SandboxKey": "/var/run/docker/netns/123809adf1fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-708011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:71:c0:e3:77:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71caade6f8e792ec8d9dce1f07288f08e50f74b2f8fdf0dbf488e545467ec977",
	                    "EndpointID": "4f919e79fbb8fd5ccc23521a514855598d1f0e74b9548d3462f85d388b915162",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-708011",
	                        "f6641e3603d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25: (1.209685434s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-151203 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ ssh     │ -p cilium-151203 sudo crio config                                                                                                                                                                                                             │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │                     │
	│ delete  │ -p cilium-151203                                                                                                                                                                                                                              │ cilium-151203            │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:15 UTC │
	│ start   │ -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:15 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p force-systemd-env-510051                                                                                                                                                                                                                   │ force-systemd-env-510051 │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-930117   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711   │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056      │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-930117   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516   │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-708011       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:19:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:19:58.027130  496520 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:19:58.027252  496520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:19:58.027262  496520 out.go:374] Setting ErrFile to fd 2...
	I1129 10:19:58.027268  496520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:19:58.027516  496520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:19:58.027929  496520 out.go:368] Setting JSON to false
	I1129 10:19:58.028797  496520 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10947,"bootTime":1764400651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:19:58.028873  496520 start.go:143] virtualization:  
	I1129 10:19:58.032557  496520 out.go:179] * [embed-certs-708011] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:19:58.035668  496520 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:19:58.035762  496520 notify.go:221] Checking for updates...
	I1129 10:19:58.041469  496520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:19:58.044668  496520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:19:58.047693  496520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:19:58.051208  496520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:19:58.054318  496520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:19:58.057792  496520 config.go:182] Loaded profile config "cert-expiration-930117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:19:58.057953  496520 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:19:58.089424  496520 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:19:58.089549  496520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:19:58.152822  496520 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:19:58.136286985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:19:58.152942  496520 docker.go:319] overlay module found
	I1129 10:19:58.156244  496520 out.go:179] * Using the docker driver based on user configuration
	I1129 10:19:58.159248  496520 start.go:309] selected driver: docker
	I1129 10:19:58.159272  496520 start.go:927] validating driver "docker" against <nil>
	I1129 10:19:58.159286  496520 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:19:58.160050  496520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:19:58.219467  496520 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:19:58.210253057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:19:58.219629  496520 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:19:58.219837  496520 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:19:58.222790  496520 out.go:179] * Using Docker driver with root privileges
	I1129 10:19:58.225725  496520 cni.go:84] Creating CNI manager for ""
	I1129 10:19:58.225804  496520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:19:58.225823  496520 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:19:58.225916  496520 start.go:353] cluster config:
	{Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:19:58.229201  496520 out.go:179] * Starting "embed-certs-708011" primary control-plane node in "embed-certs-708011" cluster
	I1129 10:19:58.232168  496520 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:19:58.235256  496520 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:19:58.238209  496520 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:19:58.238264  496520 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:19:58.238278  496520 cache.go:65] Caching tarball of preloaded images
	I1129 10:19:58.238276  496520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:19:58.238366  496520 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:19:58.238386  496520 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:19:58.238494  496520 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/config.json ...
	I1129 10:19:58.238512  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/config.json: {Name:mk216980c71b5260c929c768baf0b7fb2b3b17f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:19:58.259170  496520 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:19:58.259201  496520 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:19:58.259222  496520 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:19:58.259253  496520 start.go:360] acquireMachinesLock for embed-certs-708011: {Name:mk28b8fbcf8e50d916cc0f4f061f142dc9ca3264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:19:58.259373  496520 start.go:364] duration metric: took 98.709µs to acquireMachinesLock for "embed-certs-708011"
	I1129 10:19:58.259406  496520 start.go:93] Provisioning new machine with config: &{Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:19:58.259481  496520 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:19:55.848232  496067 out.go:252] * Updating the running docker "cert-expiration-930117" container ...
	I1129 10:19:55.848256  496067 machine.go:94] provisionDockerMachine start ...
	I1129 10:19:55.848360  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:19:55.865893  496067 main.go:143] libmachine: Using SSH client type: native
	I1129 10:19:55.866286  496067 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1129 10:19:55.866294  496067 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:19:56.023156  496067 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-930117
	
	I1129 10:19:56.023172  496067 ubuntu.go:182] provisioning hostname "cert-expiration-930117"
	I1129 10:19:56.023243  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:19:56.045530  496067 main.go:143] libmachine: Using SSH client type: native
	I1129 10:19:56.045842  496067 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1129 10:19:56.045851  496067 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-930117 && echo "cert-expiration-930117" | sudo tee /etc/hostname
	I1129 10:19:56.213326  496067 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-930117
	
	I1129 10:19:56.213423  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:19:56.236647  496067 main.go:143] libmachine: Using SSH client type: native
	I1129 10:19:56.236959  496067 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1129 10:19:56.236973  496067 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-930117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-930117/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-930117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:19:56.406728  496067 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:19:56.406745  496067 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:19:56.406764  496067 ubuntu.go:190] setting up certificates
	I1129 10:19:56.406772  496067 provision.go:84] configureAuth start
	I1129 10:19:56.406840  496067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-930117
	I1129 10:19:56.429177  496067 provision.go:143] copyHostCerts
	I1129 10:19:56.429290  496067 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:19:56.429303  496067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:19:56.429460  496067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:19:56.429571  496067 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:19:56.429576  496067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:19:56.429604  496067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:19:56.429662  496067 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:19:56.429665  496067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:19:56.429687  496067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:19:56.429740  496067 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-930117 san=[127.0.0.1 192.168.76.2 cert-expiration-930117 localhost minikube]
	I1129 10:19:56.939000  496067 provision.go:177] copyRemoteCerts
	I1129 10:19:56.939054  496067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:19:56.939093  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:19:56.958186  496067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/cert-expiration-930117/id_rsa Username:docker}
	I1129 10:19:57.078761  496067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:19:57.108190  496067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 10:19:57.127670  496067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:19:57.147533  496067 provision.go:87] duration metric: took 740.738395ms to configureAuth
	I1129 10:19:57.147559  496067 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:19:57.147763  496067 config.go:182] Loaded profile config "cert-expiration-930117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:19:57.147886  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:19:57.170903  496067 main.go:143] libmachine: Using SSH client type: native
	I1129 10:19:57.171291  496067 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33411 <nil> <nil>}
	I1129 10:19:57.171304  496067 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:19:58.262954  496520 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:19:58.263201  496520 start.go:159] libmachine.API.Create for "embed-certs-708011" (driver="docker")
	I1129 10:19:58.263239  496520 client.go:173] LocalClient.Create starting
	I1129 10:19:58.263312  496520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:19:58.263355  496520 main.go:143] libmachine: Decoding PEM data...
	I1129 10:19:58.263373  496520 main.go:143] libmachine: Parsing certificate...
	I1129 10:19:58.263443  496520 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:19:58.263468  496520 main.go:143] libmachine: Decoding PEM data...
	I1129 10:19:58.263490  496520 main.go:143] libmachine: Parsing certificate...
	I1129 10:19:58.263861  496520 cli_runner.go:164] Run: docker network inspect embed-certs-708011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:19:58.279553  496520 cli_runner.go:211] docker network inspect embed-certs-708011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:19:58.279651  496520 network_create.go:284] running [docker network inspect embed-certs-708011] to gather additional debugging logs...
	I1129 10:19:58.279675  496520 cli_runner.go:164] Run: docker network inspect embed-certs-708011
	W1129 10:19:58.295082  496520 cli_runner.go:211] docker network inspect embed-certs-708011 returned with exit code 1
	I1129 10:19:58.295112  496520 network_create.go:287] error running [docker network inspect embed-certs-708011]: docker network inspect embed-certs-708011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-708011 not found
	I1129 10:19:58.295125  496520 network_create.go:289] output of [docker network inspect embed-certs-708011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-708011 not found
	
	** /stderr **
	I1129 10:19:58.295220  496520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:19:58.311839  496520 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:19:58.312223  496520 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:19:58.312478  496520 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:19:58.312752  496520 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da32c907c77f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:43:2d:60:a7:95} reservation:<nil>}
	I1129 10:19:58.313172  496520 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a35e00}
	I1129 10:19:58.313197  496520 network_create.go:124] attempt to create docker network embed-certs-708011 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 10:19:58.313255  496520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-708011 embed-certs-708011
	I1129 10:19:58.373672  496520 network_create.go:108] docker network embed-certs-708011 192.168.85.0/24 created
	I1129 10:19:58.373704  496520 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-708011" container
	I1129 10:19:58.373783  496520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:19:58.391336  496520 cli_runner.go:164] Run: docker volume create embed-certs-708011 --label name.minikube.sigs.k8s.io=embed-certs-708011 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:19:58.408887  496520 oci.go:103] Successfully created a docker volume embed-certs-708011
	I1129 10:19:58.408990  496520 cli_runner.go:164] Run: docker run --rm --name embed-certs-708011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-708011 --entrypoint /usr/bin/test -v embed-certs-708011:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:19:58.964078  496520 oci.go:107] Successfully prepared a docker volume embed-certs-708011
	I1129 10:19:58.964134  496520 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:19:58.964144  496520 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 10:19:58.964207  496520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-708011:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 10:20:02.991691  496067 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:20:02.991703  496067 machine.go:97] duration metric: took 7.143439961s to provisionDockerMachine
	I1129 10:20:02.991713  496067 start.go:293] postStartSetup for "cert-expiration-930117" (driver="docker")
	I1129 10:20:02.991723  496067 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:20:02.991782  496067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:20:02.991836  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:20:03.016132  496067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/cert-expiration-930117/id_rsa Username:docker}
	I1129 10:20:03.126521  496067 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:20:03.130133  496067 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:20:03.130150  496067 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:20:03.130160  496067 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:20:03.130229  496067 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:20:03.130308  496067 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:20:03.130411  496067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:20:03.138560  496067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:20:03.157411  496067 start.go:296] duration metric: took 165.683407ms for postStartSetup
	I1129 10:20:03.157485  496067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:20:03.157522  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:20:03.176170  496067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/cert-expiration-930117/id_rsa Username:docker}
	I1129 10:20:03.279371  496067 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:20:03.285073  496067 fix.go:56] duration metric: took 7.457039022s for fixHost
	I1129 10:20:03.285090  496067 start.go:83] releasing machines lock for "cert-expiration-930117", held for 7.457077291s
	I1129 10:20:03.285159  496067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-930117
	I1129 10:20:03.301629  496067 ssh_runner.go:195] Run: cat /version.json
	I1129 10:20:03.301710  496067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:20:03.301720  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:20:03.301759  496067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-930117
	I1129 10:20:03.323311  496067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/cert-expiration-930117/id_rsa Username:docker}
	I1129 10:20:03.324680  496067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33411 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/cert-expiration-930117/id_rsa Username:docker}
	I1129 10:20:03.574217  496067 ssh_runner.go:195] Run: systemctl --version
	I1129 10:20:03.580757  496067 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:20:03.644235  496067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:20:03.655375  496067 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:20:03.655448  496067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:20:03.664670  496067 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:20:03.664686  496067 start.go:496] detecting cgroup driver to use...
	I1129 10:20:03.664718  496067 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:20:03.664765  496067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:20:03.680449  496067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:20:03.694944  496067 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:20:03.694997  496067 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:20:03.710874  496067 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:20:03.724807  496067 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:20:03.869628  496067 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:20:04.034812  496067 docker.go:234] disabling docker service ...
	I1129 10:20:04.034894  496067 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:20:04.054521  496067 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:20:04.074611  496067 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:20:04.274516  496067 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:20:04.441082  496067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:20:04.455307  496067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:20:04.475005  496067 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:20:04.475071  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.490594  496067 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:20:04.490652  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.519413  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.534299  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.557938  496067 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:20:04.576099  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.604731  496067 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.616336  496067 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:04.638989  496067 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:20:04.653732  496067 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:20:04.662724  496067 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:20:05.040498  496067 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:20:03.969905  496520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-708011:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.005655444s)
	I1129 10:20:03.969937  496520 kic.go:203] duration metric: took 5.005789534s to extract preloaded images to volume ...
	W1129 10:20:03.970092  496520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 10:20:03.970227  496520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 10:20:04.061039  496520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-708011 --name embed-certs-708011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-708011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-708011 --network embed-certs-708011 --ip 192.168.85.2 --volume embed-certs-708011:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 10:20:04.499808  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Running}}
	I1129 10:20:04.531089  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:04.561466  496520 cli_runner.go:164] Run: docker exec embed-certs-708011 stat /var/lib/dpkg/alternatives/iptables
	I1129 10:20:04.629998  496520 oci.go:144] the created container "embed-certs-708011" has a running status.
	I1129 10:20:04.630026  496520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa...
	I1129 10:20:04.892484  496520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 10:20:04.933087  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:04.973749  496520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 10:20:04.973768  496520 kic_runner.go:114] Args: [docker exec --privileged embed-certs-708011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 10:20:05.067667  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:05.095595  496520 machine.go:94] provisionDockerMachine start ...
	I1129 10:20:05.095681  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:05.125658  496520 main.go:143] libmachine: Using SSH client type: native
	I1129 10:20:05.126004  496520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1129 10:20:05.126013  496520 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:20:05.126752  496520 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55954->127.0.0.1:33431: read: connection reset by peer
	I1129 10:20:08.281731  496520 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-708011
	
	I1129 10:20:08.281757  496520 ubuntu.go:182] provisioning hostname "embed-certs-708011"
	I1129 10:20:08.281832  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:08.298742  496520 main.go:143] libmachine: Using SSH client type: native
	I1129 10:20:08.299067  496520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1129 10:20:08.299085  496520 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-708011 && echo "embed-certs-708011" | sudo tee /etc/hostname
	I1129 10:20:08.459009  496520 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-708011
	
	I1129 10:20:08.459095  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:08.477045  496520 main.go:143] libmachine: Using SSH client type: native
	I1129 10:20:08.477346  496520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1129 10:20:08.477361  496520 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-708011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-708011/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-708011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:20:08.626408  496520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:20:08.626436  496520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:20:08.626471  496520 ubuntu.go:190] setting up certificates
	I1129 10:20:08.626481  496520 provision.go:84] configureAuth start
	I1129 10:20:08.626550  496520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-708011
	I1129 10:20:08.645169  496520 provision.go:143] copyHostCerts
	I1129 10:20:08.645255  496520 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:20:08.645273  496520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:20:08.645365  496520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:20:08.645471  496520 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:20:08.645480  496520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:20:08.645509  496520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:20:08.645572  496520 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:20:08.645581  496520 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:20:08.645607  496520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:20:08.645661  496520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.embed-certs-708011 san=[127.0.0.1 192.168.85.2 embed-certs-708011 localhost minikube]
	I1129 10:20:09.046394  496520 provision.go:177] copyRemoteCerts
	I1129 10:20:09.046470  496520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:20:09.046512  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.063769  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:09.169863  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:20:09.187677  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:20:09.208194  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:20:09.226661  496520 provision.go:87] duration metric: took 600.149173ms to configureAuth
	I1129 10:20:09.226692  496520 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:20:09.226888  496520 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:20:09.226999  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.243993  496520 main.go:143] libmachine: Using SSH client type: native
	I1129 10:20:09.244351  496520 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1129 10:20:09.244373  496520 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:20:09.630137  496520 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:20:09.630163  496520 machine.go:97] duration metric: took 4.534547782s to provisionDockerMachine
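	(Editorial aside.) provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarting crio. A hedged sketch of confirming how the crio unit consumes that file; the assumption that it is pulled in via an EnvironmentFile directive is ours, not stated in the log:

	    # Sketch only: show where the crio unit references the minikube sysconfig file.
	    systemctl cat crio | grep -iE 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'
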
	I1129 10:20:09.630175  496520 client.go:176] duration metric: took 11.366925632s to LocalClient.Create
	I1129 10:20:09.630187  496520 start.go:167] duration metric: took 11.366986728s to libmachine.API.Create "embed-certs-708011"
	I1129 10:20:09.630201  496520 start.go:293] postStartSetup for "embed-certs-708011" (driver="docker")
	I1129 10:20:09.630212  496520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:20:09.630281  496520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:20:09.630326  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.648453  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:09.754300  496520 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:20:09.757871  496520 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:20:09.757899  496520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:20:09.757911  496520 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:20:09.757969  496520 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:20:09.758061  496520 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:20:09.758210  496520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:20:09.766620  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:20:09.785191  496520 start.go:296] duration metric: took 154.974419ms for postStartSetup
	I1129 10:20:09.785573  496520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-708011
	I1129 10:20:09.804151  496520 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/config.json ...
	I1129 10:20:09.804441  496520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:20:09.804493  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.821725  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:09.923196  496520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:20:09.928118  496520 start.go:128] duration metric: took 11.668620713s to createHost
	I1129 10:20:09.928149  496520 start.go:83] releasing machines lock for "embed-certs-708011", held for 11.668759578s
	I1129 10:20:09.928222  496520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-708011
	I1129 10:20:09.945004  496520 ssh_runner.go:195] Run: cat /version.json
	I1129 10:20:09.945060  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.945276  496520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:20:09.945356  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:09.963230  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:09.964536  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:10.070092  496520 ssh_runner.go:195] Run: systemctl --version
	I1129 10:20:10.163919  496520 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:20:10.205703  496520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:20:10.210382  496520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:20:10.210498  496520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:20:10.242652  496520 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 10:20:10.242721  496520 start.go:496] detecting cgroup driver to use...
	I1129 10:20:10.242773  496520 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:20:10.242837  496520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:20:10.260886  496520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:20:10.274177  496520 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:20:10.274284  496520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:20:10.292684  496520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:20:10.311540  496520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:20:10.442782  496520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:20:10.567076  496520 docker.go:234] disabling docker service ...
	I1129 10:20:10.567173  496520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:20:10.588314  496520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:20:10.602494  496520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:20:10.727709  496520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:20:10.839003  496520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:20:10.851763  496520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:20:10.865901  496520 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:20:10.865966  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.874846  496520 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:20:10.874961  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.883701  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.892461  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.901698  496520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:20:10.910104  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.919078  496520 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.933356  496520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:20:10.942629  496520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:20:10.950802  496520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:20:10.958372  496520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:20:11.074494  496520 ssh_runner.go:195] Run: sudo systemctl restart crio
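	(Editorial aside.) The sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and crio restart. A minimal sketch of verifying the result on the node; the expected values in the comments are inferred from the commands in the log rather than copied from its output:

	    # Sketch only: show the fields the provisioner just rewrote.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (inferred): pause_image = "registry.k8s.io/pause:3.10.1"
	    #                      cgroup_manager = "cgroupfs"
	    #                      conmon_cgroup = "pod"
	    #                      "net.ipv4.ip_unprivileged_port_start=0",
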
	I1129 10:20:11.265924  496520 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:20:11.266022  496520 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:20:11.269675  496520 start.go:564] Will wait 60s for crictl version
	I1129 10:20:11.269779  496520 ssh_runner.go:195] Run: which crictl
	I1129 10:20:11.273217  496520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:20:11.303021  496520 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:20:11.303122  496520 ssh_runner.go:195] Run: crio --version
	I1129 10:20:11.330970  496520 ssh_runner.go:195] Run: crio --version
	I1129 10:20:11.368941  496520 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:20:11.371741  496520 cli_runner.go:164] Run: docker network inspect embed-certs-708011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:20:11.388261  496520 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:20:11.392061  496520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:20:11.402109  496520 kubeadm.go:884] updating cluster {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:20:11.402244  496520 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:20:11.402305  496520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:20:11.450736  496520 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:20:11.450761  496520 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:20:11.450819  496520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:20:11.483722  496520 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:20:11.483748  496520 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:20:11.483755  496520 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 10:20:11.483844  496520 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-708011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:20:11.483931  496520 ssh_runner.go:195] Run: crio config
	I1129 10:20:11.544763  496520 cni.go:84] Creating CNI manager for ""
	I1129 10:20:11.544801  496520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:20:11.544829  496520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:20:11.544858  496520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-708011 NodeName:embed-certs-708011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:20:11.544984  496520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-708011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:20:11.545058  496520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:20:11.554036  496520 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:20:11.554136  496520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:20:11.561567  496520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 10:20:11.574521  496520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:20:11.587591  496520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
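	(Editorial aside.) The rendered kubeadm config shown at kubeadm.go:196 is what just landed on the node as /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). Not something the test does, but a config like this can be sanity-checked without creating a cluster via kubeadm's dry-run mode, for example:

	    # Sketch only: exercise the generated config without touching the cluster.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
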
	I1129 10:20:11.600805  496520 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:20:11.604692  496520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:20:11.614493  496520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:20:11.732490  496520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:20:11.749283  496520 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011 for IP: 192.168.85.2
	I1129 10:20:11.749354  496520 certs.go:195] generating shared ca certs ...
	I1129 10:20:11.749385  496520 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:11.749580  496520 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:20:11.749657  496520 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:20:11.749697  496520 certs.go:257] generating profile certs ...
	I1129 10:20:11.749791  496520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.key
	I1129 10:20:11.749824  496520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.crt with IP's: []
	I1129 10:20:12.324445  496520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.crt ...
	I1129 10:20:12.324478  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.crt: {Name:mk757e18015fa93ce8f6d33ca6d781345a32f56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.324739  496520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.key ...
	I1129 10:20:12.324766  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.key: {Name:mkd7fa5a3f68ced2f8dd89b94d364afd18103f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.324923  496520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259
	I1129 10:20:12.324943  496520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt.704f8259 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 10:20:12.426030  496520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt.704f8259 ...
	I1129 10:20:12.426063  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt.704f8259: {Name:mke3fff7b1bbd8e13a84a101074a47b2137e85ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.426253  496520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259 ...
	I1129 10:20:12.426269  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259: {Name:mk4a1bc1cd828db3a23c4da2670c6658d9a01269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.426370  496520 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt.704f8259 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt
	I1129 10:20:12.426454  496520 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key
	I1129 10:20:12.426517  496520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key
	I1129 10:20:12.426535  496520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt with IP's: []
	I1129 10:20:12.526530  496520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt ...
	I1129 10:20:12.526559  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt: {Name:mk5dbd2f198d97c4a5fa15a6345d2a1e1c256267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.526739  496520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key ...
	I1129 10:20:12.526754  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key: {Name:mk47cf244fef85e8fc1e725e6948c837a7db1b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:12.526948  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:20:12.527005  496520 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:20:12.527019  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:20:12.527050  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:20:12.527081  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:20:12.527105  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:20:12.527151  496520 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:20:12.527751  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:20:12.546841  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:20:12.565274  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:20:12.583905  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:20:12.601836  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 10:20:12.620164  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 10:20:12.637829  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:20:12.655772  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 10:20:12.676809  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:20:12.696781  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:20:12.716584  496520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:20:12.734899  496520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:20:12.748554  496520 ssh_runner.go:195] Run: openssl version
	I1129 10:20:12.754947  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:20:12.763018  496520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:20:12.766595  496520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:20:12.766665  496520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:20:12.807854  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:20:12.815866  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:20:12.824136  496520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:20:12.828285  496520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:20:12.828349  496520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:20:12.869297  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:20:12.877793  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:20:12.886403  496520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:20:12.890224  496520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:20:12.890366  496520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:20:12.932722  496520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
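	(Editorial aside.) The test/ln commands above follow the standard OpenSSL hashed-directory convention: each trust link in /etc/ssl/certs is named after the certificate's subject hash plus a .0 suffix, which is why the log pairs an `openssl x509 -hash` run with each symlink (b5213941 for minikubeCA.pem, 51391683 for 302182.pem, 3ec20f2e for 3021822.pem). A minimal sketch of reproducing one of those names:

	    # Sketch only: the link name is the subject hash printed here, plus ".0".
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # -> b5213941 in this run, hence /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
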
	I1129 10:20:12.941474  496520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:20:12.945116  496520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 10:20:12.945171  496520 kubeadm.go:401] StartCluster: {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:20:12.945244  496520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:20:12.945306  496520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:20:12.973283  496520 cri.go:89] found id: ""
	I1129 10:20:12.973367  496520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:20:12.981265  496520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 10:20:12.989044  496520 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 10:20:12.989116  496520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 10:20:12.997146  496520 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 10:20:12.997168  496520 kubeadm.go:158] found existing configuration files:
	
	I1129 10:20:12.997248  496520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 10:20:13.005758  496520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 10:20:13.005855  496520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 10:20:13.014487  496520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 10:20:13.022664  496520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 10:20:13.022776  496520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 10:20:13.030696  496520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 10:20:13.039136  496520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 10:20:13.039218  496520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 10:20:13.046856  496520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 10:20:13.054826  496520 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 10:20:13.054923  496520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 10:20:13.062789  496520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 10:20:13.100670  496520 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 10:20:13.100953  496520 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 10:20:13.125549  496520 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 10:20:13.125626  496520 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 10:20:13.125673  496520 kubeadm.go:319] OS: Linux
	I1129 10:20:13.125724  496520 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 10:20:13.125776  496520 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 10:20:13.125827  496520 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 10:20:13.125879  496520 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 10:20:13.125930  496520 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 10:20:13.125981  496520 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 10:20:13.126031  496520 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 10:20:13.126112  496520 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 10:20:13.126170  496520 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 10:20:13.192172  496520 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 10:20:13.192313  496520 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 10:20:13.192417  496520 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 10:20:13.199720  496520 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 10:20:13.202953  496520 out.go:252]   - Generating certificates and keys ...
	I1129 10:20:13.203114  496520 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 10:20:13.203235  496520 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 10:20:13.816545  496520 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 10:20:14.286792  496520 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 10:20:14.514446  496520 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 10:20:14.645390  496520 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 10:20:14.823891  496520 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 10:20:14.824085  496520 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-708011 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 10:20:15.716179  496520 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 10:20:15.716648  496520 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-708011 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 10:20:15.873153  496520 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 10:20:16.826376  496520 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 10:20:17.834639  496520 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:20:17.834935  496520 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:20:18.306617  496520 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:20:18.850394  496520 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:20:19.227326  496520 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:20:19.518054  496520 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:20:19.858657  496520 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:20:19.859283  496520 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:20:19.861988  496520 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:20:19.865760  496520 out.go:252]   - Booting up control plane ...
	I1129 10:20:19.865862  496520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:20:19.865939  496520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:20:19.866004  496520 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:20:19.881381  496520 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:20:19.881734  496520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:20:19.892231  496520 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:20:19.892671  496520 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:20:19.892729  496520 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:20:20.024486  496520 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:20:20.025084  496520 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 10:20:21.529671  496520 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502024707s
	I1129 10:20:21.531501  496520 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:20:21.531864  496520 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1129 10:20:21.532720  496520 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:20:21.533029  496520 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:20:24.939280  496520 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.405808965s
	I1129 10:20:26.880918  496520 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.347224971s
	I1129 10:20:28.034783  496520 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501456346s
	I1129 10:20:28.055942  496520 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:20:28.070876  496520 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:20:28.090494  496520 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:20:28.090736  496520 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-708011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:20:28.105589  496520 kubeadm.go:319] [bootstrap-token] Using token: wp50z4.krk3ib1wxfpld591
	I1129 10:20:28.108605  496520 out.go:252]   - Configuring RBAC rules ...
	I1129 10:20:28.108731  496520 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:20:28.118678  496520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:20:28.130618  496520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:20:28.137439  496520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:20:28.142631  496520 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:20:28.147509  496520 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:20:28.442816  496520 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:20:28.884229  496520 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:20:29.443079  496520 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:20:29.444552  496520 kubeadm.go:319] 
	I1129 10:20:29.444643  496520 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:20:29.444661  496520 kubeadm.go:319] 
	I1129 10:20:29.444755  496520 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:20:29.444764  496520 kubeadm.go:319] 
	I1129 10:20:29.444790  496520 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:20:29.444849  496520 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:20:29.444899  496520 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:20:29.444903  496520 kubeadm.go:319] 
	I1129 10:20:29.444965  496520 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:20:29.444970  496520 kubeadm.go:319] 
	I1129 10:20:29.445018  496520 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:20:29.445022  496520 kubeadm.go:319] 
	I1129 10:20:29.445073  496520 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:20:29.445148  496520 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:20:29.445230  496520 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:20:29.445236  496520 kubeadm.go:319] 
	I1129 10:20:29.445326  496520 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:20:29.445402  496520 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:20:29.445406  496520 kubeadm.go:319] 
	I1129 10:20:29.445496  496520 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wp50z4.krk3ib1wxfpld591 \
	I1129 10:20:29.445600  496520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:20:29.445620  496520 kubeadm.go:319] 	--control-plane 
	I1129 10:20:29.445624  496520 kubeadm.go:319] 
	I1129 10:20:29.445708  496520 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:20:29.445712  496520 kubeadm.go:319] 
	I1129 10:20:29.445794  496520 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wp50z4.krk3ib1wxfpld591 \
	I1129 10:20:29.445897  496520 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:20:29.449382  496520 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:20:29.449623  496520 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:20:29.449735  496520 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
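	(Editorial aside.) The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key. Purely as an illustration, it can be recomputed on the control plane with the sequence documented for kubeadm, using the CA this run placed at /var/lib/minikube/certs/ca.crt (assumes an RSA CA key, which is what minikube generated here):

	    # Sketch only: recompute the discovery-token CA cert hash.
	    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
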
	I1129 10:20:29.449758  496520 cni.go:84] Creating CNI manager for ""
	I1129 10:20:29.449770  496520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:20:29.454731  496520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 10:20:29.457511  496520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:20:29.462023  496520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:20:29.462047  496520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:20:29.476659  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:20:29.783422  496520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:20:29.783566  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:29.783648  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-708011 minikube.k8s.io/updated_at=2025_11_29T10_20_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=embed-certs-708011 minikube.k8s.io/primary=true
	I1129 10:20:30.005533  496520 ops.go:34] apiserver oom_adj: -16
	I1129 10:20:30.005660  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:30.506352  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:31.006149  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:31.506289  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:32.008569  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:32.506442  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:33.006233  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:33.506142  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:34.006412  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:34.505925  496520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:20:34.638303  496520 kubeadm.go:1114] duration metric: took 4.854783967s to wait for elevateKubeSystemPrivileges
	I1129 10:20:34.638335  496520 kubeadm.go:403] duration metric: took 21.693166814s to StartCluster
	I1129 10:20:34.638352  496520 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:34.638432  496520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:20:34.639797  496520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:20:34.640042  496520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:20:34.640056  496520 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:20:34.640329  496520 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:20:34.640380  496520 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:20:34.640443  496520 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-708011"
	I1129 10:20:34.640466  496520 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-708011"
	I1129 10:20:34.640492  496520 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:20:34.640955  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:34.641573  496520 addons.go:70] Setting default-storageclass=true in profile "embed-certs-708011"
	I1129 10:20:34.641598  496520 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-708011"
	I1129 10:20:34.641895  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:34.644380  496520 out.go:179] * Verifying Kubernetes components...
	I1129 10:20:34.647945  496520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:20:34.695924  496520 addons.go:239] Setting addon default-storageclass=true in "embed-certs-708011"
	I1129 10:20:34.695967  496520 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:20:34.696617  496520 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:20:34.696624  496520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:20:34.699986  496520 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:20:34.700010  496520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:20:34.700074  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:34.739555  496520 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:20:34.739576  496520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:20:34.739653  496520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:20:34.752218  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:34.778771  496520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:20:34.905731  496520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:20:35.041562  496520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:20:35.195718  496520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:20:35.214340  496520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:20:35.412818  496520 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
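	After the replace above, the CoreDNS Corefile in the "coredns" ConfigMap carries a hosts stanza along these lines (a minimal sketch reconstructed from the sed expression; exact indentation may differ):
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }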
	I1129 10:20:35.415347  496520 node_ready.go:35] waiting up to 6m0s for node "embed-certs-708011" to be "Ready" ...
	I1129 10:20:35.841241  496520 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1129 10:20:35.844150  496520 addons.go:530] duration metric: took 1.203759273s for enable addons: enabled=[default-storageclass storage-provisioner]
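	The enabled-addon state can also be confirmed from the host with minikube's addon listing for this profile (a hedged sketch; the table layout varies by minikube version):
	
	  out/minikube-linux-arm64 -p embed-certs-708011 addons list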
	I1129 10:20:35.916974  496520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-708011" context rescaled to 1 replicas
	W1129 10:20:37.418774  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:39.919209  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:42.419413  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:44.918802  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:47.418664  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:49.418812  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:51.419329  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:53.918552  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:56.419256  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:20:58.919016  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:01.418878  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:03.918647  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:06.418825  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:08.419097  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:10.919276  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	W1129 10:21:13.418748  496520 node_ready.go:57] node "embed-certs-708011" has "Ready":"False" status (will retry)
	I1129 10:21:15.918899  496520 node_ready.go:49] node "embed-certs-708011" is "Ready"
	I1129 10:21:15.918936  496520 node_ready.go:38] duration metric: took 40.503561085s for node "embed-certs-708011" to be "Ready" ...
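	The Ready condition being polled here can be read directly with kubectl (a minimal sketch, assuming the kubeconfig written earlier in this run):
	
	  kubectl get node embed-certs-708011 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "True" once the node is Ready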
	I1129 10:21:15.918951  496520 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:21:15.919013  496520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:21:15.930998  496520 api_server.go:72] duration metric: took 41.290910031s to wait for apiserver process to appear ...
	I1129 10:21:15.931026  496520 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:21:15.931046  496520 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:21:15.939339  496520 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:21:15.940397  496520 api_server.go:141] control plane version: v1.34.1
	I1129 10:21:15.940426  496520 api_server.go:131] duration metric: took 9.390658ms to wait for apiserver health ...
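	Equivalent health probes from outside the test harness (a sketch; /healthz is readable anonymously under default kubeadm RBAC, otherwise use the client certs from the kubeconfig):
	
	  kubectl get --raw /healthz                   # via the configured kubeconfig, prints "ok"
	  curl -k https://192.168.85.2:8443/healthz    # -k skips verification of the cluster's self-signed CA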
	I1129 10:21:15.940436  496520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:21:15.944191  496520 system_pods.go:59] 8 kube-system pods found
	I1129 10:21:15.944229  496520 system_pods.go:61] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:21:15.944235  496520 system_pods.go:61] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:15.944241  496520 system_pods.go:61] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:15.944246  496520 system_pods.go:61] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:15.944251  496520 system_pods.go:61] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:15.944255  496520 system_pods.go:61] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:15.944259  496520 system_pods.go:61] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:15.944264  496520 system_pods.go:61] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:21:15.944276  496520 system_pods.go:74] duration metric: took 3.834539ms to wait for pod list to return data ...
	I1129 10:21:15.944293  496520 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:21:15.947178  496520 default_sa.go:45] found service account: "default"
	I1129 10:21:15.947205  496520 default_sa.go:55] duration metric: took 2.904632ms for default service account to be created ...
	I1129 10:21:15.947216  496520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:21:15.949983  496520 system_pods.go:86] 8 kube-system pods found
	I1129 10:21:15.950019  496520 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:21:15.950026  496520 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:15.950033  496520 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:15.950041  496520 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:15.950045  496520 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:15.950050  496520 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:15.950054  496520 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:15.950060  496520 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:21:15.950112  496520 retry.go:31] will retry after 218.266124ms: missing components: kube-dns
	I1129 10:21:16.182810  496520 system_pods.go:86] 8 kube-system pods found
	I1129 10:21:16.182846  496520 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:21:16.182854  496520 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:16.182861  496520 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:16.182867  496520 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:16.182871  496520 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:16.182876  496520 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:16.182884  496520 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:16.182891  496520 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:21:16.182914  496520 retry.go:31] will retry after 330.755219ms: missing components: kube-dns
	I1129 10:21:16.517979  496520 system_pods.go:86] 8 kube-system pods found
	I1129 10:21:16.518019  496520 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:21:16.518027  496520 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:16.518033  496520 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:16.518038  496520 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:16.518043  496520 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:16.518048  496520 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:16.518051  496520 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:16.518058  496520 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:21:16.518104  496520 retry.go:31] will retry after 444.129932ms: missing components: kube-dns
	I1129 10:21:16.965423  496520 system_pods.go:86] 8 kube-system pods found
	I1129 10:21:16.965460  496520 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:21:16.965468  496520 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:16.965474  496520 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:16.965479  496520 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:16.965484  496520 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:16.965488  496520 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:16.965492  496520 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:16.965499  496520 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:21:16.965518  496520 retry.go:31] will retry after 529.234919ms: missing components: kube-dns
	I1129 10:21:17.498780  496520 system_pods.go:86] 8 kube-system pods found
	I1129 10:21:17.498815  496520 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Running
	I1129 10:21:17.498823  496520 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running
	I1129 10:21:17.498828  496520 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:21:17.498832  496520 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running
	I1129 10:21:17.498837  496520 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running
	I1129 10:21:17.498841  496520 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:21:17.498845  496520 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running
	I1129 10:21:17.498851  496520 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Running
	I1129 10:21:17.498858  496520 system_pods.go:126] duration metric: took 1.55163673s to wait for k8s-apps to be running ...
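	The same per-pod view used for this wait can be pulled with kubectl (a minimal sketch):
	
	  kubectl -n kube-system get pods -o wide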
	I1129 10:21:17.498866  496520 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:21:17.498926  496520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:21:17.512099  496520 system_svc.go:56] duration metric: took 13.224359ms WaitForService to wait for kubelet
	I1129 10:21:17.512125  496520 kubeadm.go:587] duration metric: took 42.872043442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:21:17.512143  496520 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:21:17.514989  496520 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:21:17.515031  496520 node_conditions.go:123] node cpu capacity is 2
	I1129 10:21:17.515046  496520 node_conditions.go:105] duration metric: took 2.896238ms to run NodePressure ...
	I1129 10:21:17.515059  496520 start.go:242] waiting for startup goroutines ...
	I1129 10:21:17.515066  496520 start.go:247] waiting for cluster config update ...
	I1129 10:21:17.515077  496520 start.go:256] writing updated cluster config ...
	I1129 10:21:17.515357  496520 ssh_runner.go:195] Run: rm -f paused
	I1129 10:21:17.518858  496520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:21:17.522646  496520 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.527340  496520 pod_ready.go:94] pod "coredns-66bc5c9577-5frc4" is "Ready"
	I1129 10:21:17.527370  496520 pod_ready.go:86] duration metric: took 4.697622ms for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.529702  496520 pod_ready.go:83] waiting for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.534576  496520 pod_ready.go:94] pod "etcd-embed-certs-708011" is "Ready"
	I1129 10:21:17.534612  496520 pod_ready.go:86] duration metric: took 4.882313ms for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.536873  496520 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.541109  496520 pod_ready.go:94] pod "kube-apiserver-embed-certs-708011" is "Ready"
	I1129 10:21:17.541134  496520 pod_ready.go:86] duration metric: took 4.23552ms for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.543452  496520 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:17.922947  496520 pod_ready.go:94] pod "kube-controller-manager-embed-certs-708011" is "Ready"
	I1129 10:21:17.922973  496520 pod_ready.go:86] duration metric: took 379.487754ms for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:18.123854  496520 pod_ready.go:83] waiting for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:18.522766  496520 pod_ready.go:94] pod "kube-proxy-phs6g" is "Ready"
	I1129 10:21:18.522794  496520 pod_ready.go:86] duration metric: took 398.912325ms for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:18.723192  496520 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:19.123192  496520 pod_ready.go:94] pod "kube-scheduler-embed-certs-708011" is "Ready"
	I1129 10:21:19.123224  496520 pod_ready.go:86] duration metric: took 399.97957ms for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:21:19.123238  496520 pod_ready.go:40] duration metric: took 1.60435094s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
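	The label-based waits above map onto kubectl's built-in wait; a hedged sketch for two of the selectors (same kubeconfig/context, timeout chosen to match the 4m0s budget):
	
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=240s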
	I1129 10:21:19.174750  496520 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:21:19.178059  496520 out.go:179] * Done! kubectl is now configured to use "embed-certs-708011" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:21:16 embed-certs-708011 crio[836]: time="2025-11-29T10:21:16.172028266Z" level=info msg="Created container 6e315e0996f23f14a60037b41b9d02c8e2e13fb59da30bb0006bc00a783679e7: kube-system/coredns-66bc5c9577-5frc4/coredns" id=25fd7e6a-b83e-4529-8664-a51f00002078 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:21:16 embed-certs-708011 crio[836]: time="2025-11-29T10:21:16.17263135Z" level=info msg="Starting container: 6e315e0996f23f14a60037b41b9d02c8e2e13fb59da30bb0006bc00a783679e7" id=45554f44-cb13-487b-a1e1-b080f65b11d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:21:16 embed-certs-708011 crio[836]: time="2025-11-29T10:21:16.179539164Z" level=info msg="Started container" PID=1737 containerID=6e315e0996f23f14a60037b41b9d02c8e2e13fb59da30bb0006bc00a783679e7 description=kube-system/coredns-66bc5c9577-5frc4/coredns id=45554f44-cb13-487b-a1e1-b080f65b11d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2fa17e955ff00d07e31d019ae85fb7ca883afbb6737ac0e3a8ad4b363aa49e86
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.677669169Z" level=info msg="Running pod sandbox: default/busybox/POD" id=214f814e-6bbe-4470-bc2b-dcbe4a26826f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.677739529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.691080829Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6a808ddaaae150007f3ce815a32ee4fc97d985399673b59050c1584759749199 UID:75efd665-57a2-4237-baf4-78e41ceda948 NetNS:/var/run/netns/dfc07dcd-8ea4-467a-9a2f-c20f781306a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000781d8}] Aliases:map[]}"
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.691263797Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.699931664Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6a808ddaaae150007f3ce815a32ee4fc97d985399673b59050c1584759749199 UID:75efd665-57a2-4237-baf4-78e41ceda948 NetNS:/var/run/netns/dfc07dcd-8ea4-467a-9a2f-c20f781306a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000781d8}] Aliases:map[]}"
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.700081631Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.706439326Z" level=info msg="Ran pod sandbox 6a808ddaaae150007f3ce815a32ee4fc97d985399673b59050c1584759749199 with infra container: default/busybox/POD" id=214f814e-6bbe-4470-bc2b-dcbe4a26826f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.708848281Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a59c85f4-a39c-485c-a556-1b924ad2d3a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.709081137Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a59c85f4-a39c-485c-a556-1b924ad2d3a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.709183899Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a59c85f4-a39c-485c-a556-1b924ad2d3a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.710233528Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=846c24aa-d1f9-4e78-95eb-8339f752b83b name=/runtime.v1.ImageService/PullImage
	Nov 29 10:21:19 embed-certs-708011 crio[836]: time="2025-11-29T10:21:19.713085377Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.661660157Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=846c24aa-d1f9-4e78-95eb-8339f752b83b name=/runtime.v1.ImageService/PullImage
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.662607639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4eb044a-9d47-43eb-b064-2ff87e064d4c name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.665711568Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f77e70c-8a81-4c7a-b787-e15aaa5c88af name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.671350215Z" level=info msg="Creating container: default/busybox/busybox" id=672a5f85-b711-4d3a-a04e-d4025198ff39 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.671504112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.676304512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.676783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.69176276Z" level=info msg="Created container 0ea0d9f1edb6975fd6a8060abe0bb623892fc2f6049d37a4ee05ec31069d0264: default/busybox/busybox" id=672a5f85-b711-4d3a-a04e-d4025198ff39 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.692969371Z" level=info msg="Starting container: 0ea0d9f1edb6975fd6a8060abe0bb623892fc2f6049d37a4ee05ec31069d0264" id=1b20d221-7cef-4467-9c44-8cfa10520bd2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:21:21 embed-certs-708011 crio[836]: time="2025-11-29T10:21:21.694500942Z" level=info msg="Started container" PID=1787 containerID=0ea0d9f1edb6975fd6a8060abe0bb623892fc2f6049d37a4ee05ec31069d0264 description=default/busybox/busybox id=1b20d221-7cef-4467-9c44-8cfa10520bd2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a808ddaaae150007f3ce815a32ee4fc97d985399673b59050c1584759749199
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	0ea0d9f1edb69       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   6a808ddaaae15       busybox                                      default
	6e315e0996f23       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   2fa17e955ff00       coredns-66bc5c9577-5frc4                     kube-system
	f7a1c19bd49b7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   1f9fce1c0cfbb       storage-provisioner                          kube-system
	09376e616dccd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   723ab555bbb1a       kube-proxy-phs6g                             kube-system
	495c1b284e12d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   18af05c6b243c       kindnet-wfvvz                                kube-system
	4d52252c76de8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f5c5814dafe21       etcd-embed-certs-708011                      kube-system
	fcc388874a714       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   9e4bb31d45883       kube-apiserver-embed-certs-708011            kube-system
	aa5ed8d488c3e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   354c9da59c31a       kube-controller-manager-embed-certs-708011   kube-system
	556fdf0216f68       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   feabe0bafa773       kube-scheduler-embed-certs-708011            kube-system
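	This table reflects what CRI-O's CLI reports on the node; a sketch, assuming crictl is pointed at the CRI-O socket:
	
	  sudo crictl -r unix:///var/run/crio/crio.sock ps -a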
	
	
	==> coredns [6e315e0996f23f14a60037b41b9d02c8e2e13fb59da30bb0006bc00a783679e7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35402 - 28416 "HINFO IN 3385805368535305746.4794824317078184716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01362571s
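	The same CoreDNS output can be fetched through the API server rather than the container runtime (a minimal sketch):
	
	  kubectl -n kube-system logs coredns-66bc5c9577-5frc4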
	
	
	==> describe nodes <==
	Name:               embed-certs-708011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-708011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-708011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-708011
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:21:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:21:15 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:21:15 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:21:15 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:21:15 +0000   Sat, 29 Nov 2025 10:21:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-708011
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1f99ece5-e15d-4bbe-acc3-9db5d863dc89
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-5frc4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-708011                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-wfvvz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-708011             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-708011    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-phs6g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-708011             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-708011 event: Registered Node embed-certs-708011 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-708011 status is now: NodeReady
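	This section is the standard node description; the equivalent command against the cluster (a sketch, same kubeconfig as above):
	
	  kubectl describe node embed-certs-708011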
	
	
	==> dmesg <==
	[Nov29 09:51] overlayfs: idmapped layers are currently not supported
	[Nov29 09:52] overlayfs: idmapped layers are currently not supported
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4d52252c76de89a36faf74f1754d3dcc3173e7c9849afcc3213a231dbaf41130] <==
	{"level":"warn","ts":"2025-11-29T10:20:24.815346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.838753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.863431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.882820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.922360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.925228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.944429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:24.981997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.002595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.029383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.051966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.086146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.106112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.126322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.140473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.165769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.190499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.199302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.219520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.235091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.253422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.273525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.300287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.318034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:20:25.414022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:30 up  3:03,  0 user,  load average: 2.38, 2.48, 2.27
	Linux embed-certs-708011 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [495c1b284e12d71be2a9d8ec872dbce397017e78a12eba109b12fbf726c96605] <==
	I1129 10:20:34.920054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:20:34.920319       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:20:34.920441       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:20:34.920452       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:20:34.920462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:20:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:20:35.217896       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:20:35.217918       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:20:35.217927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:20:35.218865       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:21:05.218240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:21:05.218243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:21:05.218340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:21:05.219660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1129 10:21:06.518253       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:21:06.518288       1 metrics.go:72] Registering metrics
	I1129 10:21:06.518363       1 controller.go:711] "Syncing nftables rules"
	I1129 10:21:15.223191       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:21:15.223250       1 main.go:301] handling current node
	I1129 10:21:25.218167       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:21:25.218252       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fcc388874a714c28123a1efe6ced1feb9cd9277ffdb7494c26058a4c3b357de9] <==
	E1129 10:20:26.351691       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1129 10:20:26.351774       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1129 10:20:26.374174       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:20:26.381016       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:20:26.381319       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 10:20:26.400310       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:20:26.404580       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:20:26.559650       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:20:27.041907       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 10:20:27.048397       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 10:20:27.048486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:20:27.756590       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:20:27.807258       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:20:27.926937       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 10:20:27.937682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 10:20:27.938974       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:20:27.946405       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:20:28.309758       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:20:28.863870       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:20:28.882862       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 10:20:28.900799       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 10:20:34.069184       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:20:34.077801       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:20:34.263518       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:20:34.314528       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [aa5ed8d488c3e4ccd44d357489463b35cdb588354efdc20eef4b15a6b7bfe395] <==
	I1129 10:20:33.356867       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:20:33.356876       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:20:33.357025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:20:33.357072       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:20:33.357087       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:20:33.359311       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 10:20:33.359359       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 10:20:33.359723       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:20:33.359384       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 10:20:33.359395       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:20:33.359404       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 10:20:33.359372       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:20:33.362931       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:20:33.363060       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-708011"
	I1129 10:20:33.363147       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 10:20:33.364822       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:20:33.359430       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:20:33.365436       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 10:20:33.365523       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:20:33.365584       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:20:33.365617       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:20:33.365661       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 10:20:33.373544       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:20:33.378340       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-708011" podCIDRs=["10.244.0.0/24"]
	I1129 10:21:18.368704       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [09376e616dccdd0c06f07156e7f8b1d8d06dadf4b04c56887a9fc5b45637ec66] <==
	I1129 10:20:35.053691       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:20:35.144290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:20:35.261020       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:20:35.261063       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:20:35.261146       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:20:35.345109       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:20:35.348951       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:20:35.354641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:20:35.354971       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:20:35.354984       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:20:35.356175       1 config.go:200] "Starting service config controller"
	I1129 10:20:35.356190       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:20:35.365254       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:20:35.365274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:20:35.365298       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:20:35.365302       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:20:35.366250       1 config.go:309] "Starting node config controller"
	I1129 10:20:35.366261       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:20:35.366268       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:20:35.459272       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:20:35.465969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:20:35.466017       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [556fdf0216f688232bb7764e97c97ade62d3f36e5951be452e12768b0ac42f6a] <==
	I1129 10:20:26.860321       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:20:26.863051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:20:26.863156       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:20:26.863425       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:20:26.865415       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 10:20:26.880813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:20:26.881532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:20:26.881597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:20:26.881723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:20:26.884811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:20:26.884510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:20:26.884554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:20:26.884591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:20:26.884604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:20:26.884404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:20:26.885218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:20:26.885219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:20:26.885282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:20:26.885330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:20:26.885420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:20:26.885404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:20:26.885474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:20:26.885574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:20:26.885583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1129 10:20:28.064931       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:20:30 embed-certs-708011 kubelet[1295]: I1129 10:20:30.035559    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-708011" podStartSLOduration=1.035523758 podStartE2EDuration="1.035523758s" podCreationTimestamp="2025-11-29 10:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:20:30.002925398 +0000 UTC m=+1.310164780" watchObservedRunningTime="2025-11-29 10:20:30.035523758 +0000 UTC m=+1.342763131"
	Nov 29 10:20:30 embed-certs-708011 kubelet[1295]: I1129 10:20:30.099658    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-708011" podStartSLOduration=3.099609834 podStartE2EDuration="3.099609834s" podCreationTimestamp="2025-11-29 10:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:20:30.037512827 +0000 UTC m=+1.344752200" watchObservedRunningTime="2025-11-29 10:20:30.099609834 +0000 UTC m=+1.406849289"
	Nov 29 10:20:33 embed-certs-708011 kubelet[1295]: I1129 10:20:33.380068    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 10:20:33 embed-certs-708011 kubelet[1295]: I1129 10:20:33.380601    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463487    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5kg7\" (UniqueName: \"kubernetes.io/projected/84396f86-dd6d-48d7-9b5b-49ebf273f71b-kube-api-access-v5kg7\") pod \"kube-proxy-phs6g\" (UID: \"84396f86-dd6d-48d7-9b5b-49ebf273f71b\") " pod="kube-system/kube-proxy-phs6g"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463549    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff138410-a3cc-4e8e-a66c-dcbcf88b738c-lib-modules\") pod \"kindnet-wfvvz\" (UID: \"ff138410-a3cc-4e8e-a66c-dcbcf88b738c\") " pod="kube-system/kindnet-wfvvz"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463571    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84396f86-dd6d-48d7-9b5b-49ebf273f71b-xtables-lock\") pod \"kube-proxy-phs6g\" (UID: \"84396f86-dd6d-48d7-9b5b-49ebf273f71b\") " pod="kube-system/kube-proxy-phs6g"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463593    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84396f86-dd6d-48d7-9b5b-49ebf273f71b-lib-modules\") pod \"kube-proxy-phs6g\" (UID: \"84396f86-dd6d-48d7-9b5b-49ebf273f71b\") " pod="kube-system/kube-proxy-phs6g"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463614    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff138410-a3cc-4e8e-a66c-dcbcf88b738c-xtables-lock\") pod \"kindnet-wfvvz\" (UID: \"ff138410-a3cc-4e8e-a66c-dcbcf88b738c\") " pod="kube-system/kindnet-wfvvz"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463634    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ff138410-a3cc-4e8e-a66c-dcbcf88b738c-cni-cfg\") pod \"kindnet-wfvvz\" (UID: \"ff138410-a3cc-4e8e-a66c-dcbcf88b738c\") " pod="kube-system/kindnet-wfvvz"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463651    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgw8\" (UniqueName: \"kubernetes.io/projected/ff138410-a3cc-4e8e-a66c-dcbcf88b738c-kube-api-access-tqgw8\") pod \"kindnet-wfvvz\" (UID: \"ff138410-a3cc-4e8e-a66c-dcbcf88b738c\") " pod="kube-system/kindnet-wfvvz"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.463669    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84396f86-dd6d-48d7-9b5b-49ebf273f71b-kube-proxy\") pod \"kube-proxy-phs6g\" (UID: \"84396f86-dd6d-48d7-9b5b-49ebf273f71b\") " pod="kube-system/kube-proxy-phs6g"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.588213    1295 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: W1129 10:20:34.724227    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/crio-723ab555bbb1a904f6424560bf45f4f2a357ed6cadb1ef703b0e2c48f9991298 WatchSource:0}: Error finding container 723ab555bbb1a904f6424560bf45f4f2a357ed6cadb1ef703b0e2c48f9991298: Status 404 returned error can't find the container with id 723ab555bbb1a904f6424560bf45f4f2a357ed6cadb1ef703b0e2c48f9991298
	Nov 29 10:20:34 embed-certs-708011 kubelet[1295]: I1129 10:20:34.991921    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wfvvz" podStartSLOduration=0.99190061 podStartE2EDuration="991.90061ms" podCreationTimestamp="2025-11-29 10:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:20:34.955469927 +0000 UTC m=+6.262709291" watchObservedRunningTime="2025-11-29 10:20:34.99190061 +0000 UTC m=+6.299139983"
	Nov 29 10:20:38 embed-certs-708011 kubelet[1295]: I1129 10:20:38.061838    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-phs6g" podStartSLOduration=4.061814165 podStartE2EDuration="4.061814165s" podCreationTimestamp="2025-11-29 10:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:20:34.992264503 +0000 UTC m=+6.299503876" watchObservedRunningTime="2025-11-29 10:20:38.061814165 +0000 UTC m=+9.369053530"
	Nov 29 10:21:15 embed-certs-708011 kubelet[1295]: I1129 10:21:15.736016    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 10:21:15 embed-certs-708011 kubelet[1295]: I1129 10:21:15.884236    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvljk\" (UniqueName: \"kubernetes.io/projected/708179d5-3a6c-457c-8c3a-32e60b0ec8d4-kube-api-access-mvljk\") pod \"coredns-66bc5c9577-5frc4\" (UID: \"708179d5-3a6c-457c-8c3a-32e60b0ec8d4\") " pod="kube-system/coredns-66bc5c9577-5frc4"
	Nov 29 10:21:15 embed-certs-708011 kubelet[1295]: I1129 10:21:15.884299    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ca33c340-0e42-4780-bf32-d1e48f79705f-tmp\") pod \"storage-provisioner\" (UID: \"ca33c340-0e42-4780-bf32-d1e48f79705f\") " pod="kube-system/storage-provisioner"
	Nov 29 10:21:15 embed-certs-708011 kubelet[1295]: I1129 10:21:15.884319    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2fg\" (UniqueName: \"kubernetes.io/projected/ca33c340-0e42-4780-bf32-d1e48f79705f-kube-api-access-md2fg\") pod \"storage-provisioner\" (UID: \"ca33c340-0e42-4780-bf32-d1e48f79705f\") " pod="kube-system/storage-provisioner"
	Nov 29 10:21:15 embed-certs-708011 kubelet[1295]: I1129 10:21:15.884344    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/708179d5-3a6c-457c-8c3a-32e60b0ec8d4-config-volume\") pod \"coredns-66bc5c9577-5frc4\" (UID: \"708179d5-3a6c-457c-8c3a-32e60b0ec8d4\") " pod="kube-system/coredns-66bc5c9577-5frc4"
	Nov 29 10:21:17 embed-certs-708011 kubelet[1295]: I1129 10:21:17.083905    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.083888065 podStartE2EDuration="42.083888065s" podCreationTimestamp="2025-11-29 10:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:21:17.083767776 +0000 UTC m=+48.391007174" watchObservedRunningTime="2025-11-29 10:21:17.083888065 +0000 UTC m=+48.391127438"
	Nov 29 10:21:19 embed-certs-708011 kubelet[1295]: I1129 10:21:19.367751    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5frc4" podStartSLOduration=45.367731788 podStartE2EDuration="45.367731788s" podCreationTimestamp="2025-11-29 10:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:21:17.097763648 +0000 UTC m=+48.405003021" watchObservedRunningTime="2025-11-29 10:21:19.367731788 +0000 UTC m=+50.674971153"
	Nov 29 10:21:19 embed-certs-708011 kubelet[1295]: I1129 10:21:19.508519    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmrb\" (UniqueName: \"kubernetes.io/projected/75efd665-57a2-4237-baf4-78e41ceda948-kube-api-access-cxmrb\") pod \"busybox\" (UID: \"75efd665-57a2-4237-baf4-78e41ceda948\") " pod="default/busybox"
	Nov 29 10:21:22 embed-certs-708011 kubelet[1295]: I1129 10:21:22.095818    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.141781247 podStartE2EDuration="3.095797305s" podCreationTimestamp="2025-11-29 10:21:19 +0000 UTC" firstStartedPulling="2025-11-29 10:21:19.709527821 +0000 UTC m=+51.016767186" lastFinishedPulling="2025-11-29 10:21:21.66354388 +0000 UTC m=+52.970783244" observedRunningTime="2025-11-29 10:21:22.094965213 +0000 UTC m=+53.402204586" watchObservedRunningTime="2025-11-29 10:21:22.095797305 +0000 UTC m=+53.403036669"
	
	
	==> storage-provisioner [f7a1c19bd49b78a6f3ff0ab817812ef31d724009a63f1c8d938b23380d02ee4c] <==
	I1129 10:21:16.161133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:21:16.187951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:21:16.188077       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:21:16.194933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:16.205432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:21:16.206499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:21:16.209151       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_e2b2ed49-67ea-47bc-989e-756846f06c1d!
	I1129 10:21:16.215579       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8f51354-af5a-4f25-a98a-e2bfddbbd579", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-708011_e2b2ed49-67ea-47bc-989e-756846f06c1d became leader
	W1129 10:21:16.223432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:16.239852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:21:16.309899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_e2b2ed49-67ea-47bc-989e-756846f06c1d!
	W1129 10:21:18.243277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:18.247702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:20.251164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:20.258193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:22.261473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:22.266063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:24.268896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:24.275688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:26.279254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:26.283724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:28.287151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:28.293879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:30.297585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:21:30.303316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
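The kube-scheduler block above shows a burst of "Failed to watch ... is forbidden" errors at 10:20:26 that stops once the client-ca informer cache syncs at 10:20:28; this is typically the normal start-up window before the API server serves the default RBAC bindings rather than a lasting misconfiguration. A minimal spot-check, assuming the embed-certs-708011 kubeconfig context from this run and the standard static-pod name for the scheduler:

kubectl --context embed-certs-708011 get clusterrolebinding system:kube-scheduler
kubectl --context embed-certs-708011 -n kube-system logs kube-scheduler-embed-certs-708011 | grep -c "Failed to watch"

If the error count kept growing after the "Caches are synced" message, that would point at missing RBAC rather than a startup race.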
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-708011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-708011 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-708011 --alsologtostderr -v=1: exit status 80 (2.595728245s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-708011 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:22:52.170883  506443 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:22:52.171399  506443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:22:52.171415  506443 out.go:374] Setting ErrFile to fd 2...
	I1129 10:22:52.171421  506443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:22:52.172308  506443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:22:52.172815  506443 out.go:368] Setting JSON to false
	I1129 10:22:52.172904  506443 mustload.go:66] Loading cluster: embed-certs-708011
	I1129 10:22:52.173660  506443 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:22:52.174591  506443 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:22:52.198125  506443 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:22:52.198601  506443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:22:52.282988  506443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:22:52.272730724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:22:52.283672  506443 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-708011 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:22:52.287248  506443 out.go:179] * Pausing node embed-certs-708011 ... 
	I1129 10:22:52.290240  506443 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:22:52.290695  506443 ssh_runner.go:195] Run: systemctl --version
	I1129 10:22:52.290765  506443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:22:52.321809  506443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:22:52.429187  506443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:52.443685  506443 pause.go:52] kubelet running: true
	I1129 10:22:52.443759  506443 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:22:52.689436  506443 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:22:52.689520  506443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:22:52.762981  506443 cri.go:89] found id: "52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a"
	I1129 10:22:52.763055  506443 cri.go:89] found id: "015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05"
	I1129 10:22:52.763076  506443 cri.go:89] found id: "826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604"
	I1129 10:22:52.763093  506443 cri.go:89] found id: "b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c"
	I1129 10:22:52.763121  506443 cri.go:89] found id: "e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	I1129 10:22:52.763145  506443 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:22:52.763167  506443 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:22:52.763194  506443 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:22:52.763224  506443 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:22:52.763247  506443 cri.go:89] found id: "05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	I1129 10:22:52.763263  506443 cri.go:89] found id: "4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02"
	I1129 10:22:52.763279  506443 cri.go:89] found id: ""
	I1129 10:22:52.763365  506443 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:22:52.784821  506443 retry.go:31] will retry after 141.763196ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:22:52Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:22:52.927143  506443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:52.940591  506443 pause.go:52] kubelet running: false
	I1129 10:22:52.940655  506443 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:22:53.123983  506443 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:22:53.124064  506443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:22:53.200489  506443 cri.go:89] found id: "52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a"
	I1129 10:22:53.200516  506443 cri.go:89] found id: "015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05"
	I1129 10:22:53.200522  506443 cri.go:89] found id: "826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604"
	I1129 10:22:53.200526  506443 cri.go:89] found id: "b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c"
	I1129 10:22:53.200529  506443 cri.go:89] found id: "e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	I1129 10:22:53.200533  506443 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:22:53.200536  506443 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:22:53.200539  506443 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:22:53.200542  506443 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:22:53.200548  506443 cri.go:89] found id: "05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	I1129 10:22:53.200552  506443 cri.go:89] found id: "4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02"
	I1129 10:22:53.200555  506443 cri.go:89] found id: ""
	I1129 10:22:53.200622  506443 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:22:53.212157  506443 retry.go:31] will retry after 379.881691ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:22:53Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:22:53.592843  506443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:53.606804  506443 pause.go:52] kubelet running: false
	I1129 10:22:53.606926  506443 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:22:53.780925  506443 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:22:53.781038  506443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:22:53.850351  506443 cri.go:89] found id: "52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a"
	I1129 10:22:53.850380  506443 cri.go:89] found id: "015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05"
	I1129 10:22:53.850385  506443 cri.go:89] found id: "826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604"
	I1129 10:22:53.850389  506443 cri.go:89] found id: "b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c"
	I1129 10:22:53.850392  506443 cri.go:89] found id: "e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	I1129 10:22:53.850395  506443 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:22:53.850398  506443 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:22:53.850401  506443 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:22:53.850430  506443 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:22:53.850443  506443 cri.go:89] found id: "05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	I1129 10:22:53.850446  506443 cri.go:89] found id: "4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02"
	I1129 10:22:53.850449  506443 cri.go:89] found id: ""
	I1129 10:22:53.850522  506443 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:22:53.862971  506443 retry.go:31] will retry after 547.856904ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:22:53Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:22:54.411285  506443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:54.426142  506443 pause.go:52] kubelet running: false
	I1129 10:22:54.426224  506443 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:22:54.603126  506443 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:22:54.603202  506443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:22:54.670203  506443 cri.go:89] found id: "52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a"
	I1129 10:22:54.670233  506443 cri.go:89] found id: "015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05"
	I1129 10:22:54.670239  506443 cri.go:89] found id: "826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604"
	I1129 10:22:54.670256  506443 cri.go:89] found id: "b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c"
	I1129 10:22:54.670261  506443 cri.go:89] found id: "e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	I1129 10:22:54.670265  506443 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:22:54.670272  506443 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:22:54.670308  506443 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:22:54.670318  506443 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:22:54.670325  506443 cri.go:89] found id: "05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	I1129 10:22:54.670329  506443 cri.go:89] found id: "4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02"
	I1129 10:22:54.670332  506443 cri.go:89] found id: ""
	I1129 10:22:54.670392  506443 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:22:54.687194  506443 out.go:203] 
	W1129 10:22:54.690122  506443 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:22:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:22:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:22:54.690198  506443 out.go:285] * 
	* 
	W1129 10:22:54.697245  506443 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:22:54.700161  506443 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-708011 --alsologtostderr -v=1 failed: exit status 80
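The stderr above traces the pause sequence: minikube stops the kubelet, enumerates kube-system/kubernetes-dashboard/istio-operator containers with crictl, then asks runc for its state with `sudo runc list -f json`, and that command fails on every retry because /run/runc does not exist inside the node, ending in GUEST_PAUSE. A quick way to see which runtime state directories the node actually has (a sketch; the candidate paths other than /run/runc are assumptions, not something this log confirms):

out/minikube-linux-arm64 -p embed-certs-708011 ssh -- "sudo ls -d /run/runc /run/crun /run/crio 2>&1"
out/minikube-linux-arm64 -p embed-certs-708011 ssh -- "sudo crictl ps --quiet | wc -l"

The second command should still list the running containers that cri.go found above even while `runc list` has nothing to read.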
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-708011
helpers_test.go:243: (dbg) docker inspect embed-certs-708011:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	        "Created": "2025-11-29T10:20:04.082616861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:21:43.917067482Z",
	            "FinishedAt": "2025-11-29T10:21:42.689938476Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hosts",
	        "LogPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a-json.log",
	        "Name": "/embed-certs-708011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-708011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-708011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	                "LowerDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-708011",
	                "Source": "/var/lib/docker/volumes/embed-certs-708011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-708011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-708011",
	                "name.minikube.sigs.k8s.io": "embed-certs-708011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e4e911abcf81853b1352e4308648328f71985deafffaac611ad99d8d699eea4",
	            "SandboxKey": "/var/run/docker/netns/0e4e911abcf8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-708011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:9f:09:26:3b:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71caade6f8e792ec8d9dce1f07288f08e50f74b2f8fdf0dbf488e545467ec977",
	                    "EndpointID": "0fef8f051280f7c71e0c43243e434592bfc22f3dfa66a19ef8559242ffe617cb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-708011",
	                        "f6641e3603d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
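The NetworkSettings.Ports block in the inspect output above is the same data the pause attempt queried through cli_runner (the Port:33436 SSH client in the stderr earlier); that lookup can be reproduced on its own with the format string already used there:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-708011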
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011: exit status 2 (358.900568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25: (1.48355715s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711       │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                                                                                               │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:21:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:21:49.265071  501976 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:21:49.265224  501976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:21:49.265231  501976 out.go:374] Setting ErrFile to fd 2...
	I1129 10:21:49.265236  501976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:21:49.265489  501976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:21:49.265890  501976 out.go:368] Setting JSON to false
	I1129 10:21:49.266854  501976 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11059,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:21:49.266920  501976 start.go:143] virtualization:  
	I1129 10:21:49.270881  501976 out.go:179] * [no-preload-949993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:21:49.274294  501976 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:21:49.274356  501976 notify.go:221] Checking for updates...
	I1129 10:21:49.280580  501976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:21:49.283618  501976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:49.286549  501976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:21:49.289477  501976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:21:49.292650  501976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:21:49.296285  501976 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:49.296445  501976 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:21:49.334262  501976 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:21:49.334394  501976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:21:49.441198  501976 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:21:49.431554622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:21:49.441295  501976 docker.go:319] overlay module found
	I1129 10:21:49.444546  501976 out.go:179] * Using the docker driver based on user configuration
	I1129 10:21:49.447474  501976 start.go:309] selected driver: docker
	I1129 10:21:49.447505  501976 start.go:927] validating driver "docker" against <nil>
	I1129 10:21:49.447520  501976 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:21:49.448233  501976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:21:49.559710  501976 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:21:49.545511622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:21:49.559861  501976 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:21:49.560082  501976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:21:49.562991  501976 out.go:179] * Using Docker driver with root privileges
	I1129 10:21:49.565804  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:21:49.565876  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:21:49.565885  501976 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:21:49.565959  501976 start.go:353] cluster config:
	{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:21:49.569093  501976 out.go:179] * Starting "no-preload-949993" primary control-plane node in "no-preload-949993" cluster
	I1129 10:21:49.571878  501976 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:21:49.574795  501976 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:21:49.577556  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:21:49.577677  501976 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:21:49.577710  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json: {Name:mk358b56b7fe514be101ec22fbf5f7b1feeb0ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:49.577896  501976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:21:49.578242  501976 cache.go:107] acquiring lock: {Name:mk7e036f21c3fa53998769ec8ca8e9d0cc43797a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578314  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 10:21:49.578322  501976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.406µs
	I1129 10:21:49.578334  501976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 10:21:49.578345  501976 cache.go:107] acquiring lock: {Name:mk55e5c5c1d216b13668659dfb1a1298483fe357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578376  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 10:21:49.578382  501976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.277µs
	I1129 10:21:49.578388  501976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 10:21:49.578397  501976 cache.go:107] acquiring lock: {Name:mk79de74aa677651359631e14e64f02dbae72c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578429  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 10:21:49.578434  501976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 39.122µs
	I1129 10:21:49.578440  501976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 10:21:49.578449  501976 cache.go:107] acquiring lock: {Name:mk3420fbe5609e73633731fff1b3352eed3a8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578478  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 10:21:49.578483  501976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 34.544µs
	I1129 10:21:49.578488  501976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 10:21:49.578507  501976 cache.go:107] acquiring lock: {Name:mkec0dc08372453f12658d7249505bdb38e0468a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578534  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 10:21:49.578539  501976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 33.28µs
	I1129 10:21:49.578544  501976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 10:21:49.578553  501976 cache.go:107] acquiring lock: {Name:mkb12ce0a127601415f42976e337ea76e82915af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578578  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 10:21:49.578582  501976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.679µs
	I1129 10:21:49.578587  501976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 10:21:49.578601  501976 cache.go:107] acquiring lock: {Name:mkc2341e09a949f9273b1d33b0a3b4021526fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578626  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 10:21:49.578630  501976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 35.906µs
	I1129 10:21:49.578636  501976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 10:21:49.578644  501976 cache.go:107] acquiring lock: {Name:mk0167a0bfcd689b945be8d473d2efef87ce9fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578669  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 10:21:49.578673  501976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.237µs
	I1129 10:21:49.578678  501976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 10:21:49.578684  501976 cache.go:87] Successfully saved all images to host disk.
	I1129 10:21:49.599024  501976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:21:49.599043  501976 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:21:49.599057  501976 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:21:49.599087  501976 start.go:360] acquireMachinesLock for no-preload-949993: {Name:mk6ff94a11813e006c209466e9cbb5aadf7ae1bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.599183  501976 start.go:364] duration metric: took 80.222µs to acquireMachinesLock for "no-preload-949993"
	I1129 10:21:49.599210  501976 start.go:93] Provisioning new machine with config: &{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:21:49.599275  501976 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:21:48.685952  500704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:21:48.685972  500704 machine.go:97] duration metric: took 4.38105876s to provisionDockerMachine
	I1129 10:21:48.685984  500704 start.go:293] postStartSetup for "embed-certs-708011" (driver="docker")
	I1129 10:21:48.685995  500704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:21:48.686070  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:21:48.686135  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:48.717516  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:48.854265  500704 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:21:48.857813  500704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:21:48.857841  500704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:21:48.857852  500704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:21:48.857912  500704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:21:48.858000  500704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:21:48.858148  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:21:48.866324  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:48.886427  500704 start.go:296] duration metric: took 200.427369ms for postStartSetup
	I1129 10:21:48.886527  500704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:21:48.886578  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:48.914159  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.023500  500704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:21:49.028855  500704 fix.go:56] duration metric: took 5.197571787s for fixHost
	I1129 10:21:49.028883  500704 start.go:83] releasing machines lock for "embed-certs-708011", held for 5.197619959s
	I1129 10:21:49.028963  500704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-708011
	I1129 10:21:49.046939  500704 ssh_runner.go:195] Run: cat /version.json
	I1129 10:21:49.046985  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:49.047021  500704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:21:49.047072  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:49.073571  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.077827  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.328539  500704 ssh_runner.go:195] Run: systemctl --version
	I1129 10:21:49.335502  500704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:21:49.379038  500704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:21:49.384009  500704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:21:49.384084  500704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:21:49.393348  500704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:21:49.393374  500704 start.go:496] detecting cgroup driver to use...
	I1129 10:21:49.393406  500704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:21:49.393463  500704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:21:49.410534  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:21:49.431149  500704 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:21:49.431214  500704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:21:49.453131  500704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:21:49.473390  500704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:21:49.627380  500704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:21:49.794820  500704 docker.go:234] disabling docker service ...
	I1129 10:21:49.794949  500704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:21:49.811462  500704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:21:49.828309  500704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:21:49.990148  500704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:21:50.180945  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:21:50.196830  500704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:21:50.230144  500704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:21:50.230232  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.250536  500704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:21:50.250611  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.264034  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.276285  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.290857  500704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:21:50.304652  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.316890  500704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.331852  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.358542  500704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:21:50.379460  500704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:21:50.388513  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:50.542496  500704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:21:50.765924  500704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:21:50.766007  500704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:21:50.770894  500704 start.go:564] Will wait 60s for crictl version
	I1129 10:21:50.770967  500704 ssh_runner.go:195] Run: which crictl
	I1129 10:21:50.775452  500704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:21:50.811556  500704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:21:50.811689  500704 ssh_runner.go:195] Run: crio --version
	I1129 10:21:50.842901  500704 ssh_runner.go:195] Run: crio --version
	I1129 10:21:50.898181  500704 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:21:50.901392  500704 cli_runner.go:164] Run: docker network inspect embed-certs-708011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:21:50.936721  500704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:21:50.941226  500704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:21:50.961655  500704 kubeadm.go:884] updating cluster {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:21:50.961795  500704 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:21:50.961942  500704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:21:51.031859  500704 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:21:51.031882  500704 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:21:51.031940  500704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:21:51.085813  500704 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:21:51.085838  500704 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:21:51.085845  500704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 10:21:51.085955  500704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-708011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:21:51.086031  500704 ssh_runner.go:195] Run: crio config
	I1129 10:21:51.177424  500704 cni.go:84] Creating CNI manager for ""
	I1129 10:21:51.177449  500704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:21:51.177504  500704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:21:51.177539  500704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-708011 NodeName:embed-certs-708011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:21:51.177735  500704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-708011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:21:51.177823  500704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:21:51.191666  500704 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:21:51.191747  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:21:51.202843  500704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 10:21:51.224895  500704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:21:51.251533  500704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1129 10:21:51.274654  500704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:21:51.279866  500704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:21:51.293203  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:51.504346  500704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:21:51.542481  500704 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011 for IP: 192.168.85.2
	I1129 10:21:51.542500  500704 certs.go:195] generating shared ca certs ...
	I1129 10:21:51.542515  500704 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:51.542664  500704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:21:51.542702  500704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:21:51.542708  500704 certs.go:257] generating profile certs ...
	I1129 10:21:51.542795  500704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.key
	I1129 10:21:51.542861  500704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259
	I1129 10:21:51.542909  500704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key
	I1129 10:21:51.543026  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:21:51.543054  500704 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:21:51.543061  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:21:51.543086  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:21:51.543111  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:21:51.543139  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:21:51.543181  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:51.543746  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:21:51.591663  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:21:51.630446  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:21:51.671101  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:21:51.711903  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 10:21:51.761012  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 10:21:51.900958  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:21:51.984906  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 10:21:52.031290  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:21:52.052977  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:21:52.078132  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:21:52.111633  500704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:21:52.126807  500704 ssh_runner.go:195] Run: openssl version
	I1129 10:21:52.133630  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:21:52.142998  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.149549  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.149629  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.198511  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:21:52.207512  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:21:52.220889  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.226202  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.226276  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.280961  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:21:52.294053  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:21:52.309065  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.314650  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.314713  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.364445  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:21:52.372701  500704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:21:52.376587  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:21:52.418498  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:21:52.473423  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:21:52.545829  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:21:52.638387  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:21:52.719303  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 10:21:52.780474  500704 kubeadm.go:401] StartCluster: {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:21:52.780568  500704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:21:52.780646  500704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:21:52.834497  500704 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:21:52.834518  500704 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:21:52.834523  500704 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:21:52.834535  500704 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:21:52.834538  500704 cri.go:89] found id: ""
	I1129 10:21:52.834627  500704 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:21:52.864644  500704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:21:52Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:21:52.864717  500704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:21:52.900414  500704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:21:52.900489  500704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:21:52.900583  500704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:21:52.923660  500704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:21:52.924236  500704 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-708011" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:52.924438  500704 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-708011" cluster setting kubeconfig missing "embed-certs-708011" context setting]
	I1129 10:21:52.924785  500704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
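The kubeconfig repair above adds the missing cluster and context entries for the profile and rewrites the file under a write lock. A minimal sketch of the same idea using client-go's clientcmd package (the helper name and server-URL handling here are assumptions, not minikube's exact code):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureProfile makes sure a cluster and context entry exist for name.
func ensureProfile(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	_ = ensureProfile("/home/jenkins/.kube/config", "embed-certs-708011", "https://192.168.85.2:8443")
}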
	I1129 10:21:52.926585  500704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:21:52.941602  500704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 10:21:52.941688  500704 kubeadm.go:602] duration metric: took 41.179616ms to restartPrimaryControlPlane
	I1129 10:21:52.941715  500704 kubeadm.go:403] duration metric: took 161.258042ms to StartCluster
	I1129 10:21:52.941772  500704 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:52.941872  500704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:52.943006  500704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:52.943324  500704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:21:52.943730  500704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:21:52.943815  500704 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-708011"
	I1129 10:21:52.943830  500704 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-708011"
	W1129 10:21:52.943849  500704 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:21:52.943872  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:52.944473  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.945004  500704 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:52.945102  500704 addons.go:70] Setting dashboard=true in profile "embed-certs-708011"
	I1129 10:21:52.945139  500704 addons.go:239] Setting addon dashboard=true in "embed-certs-708011"
	W1129 10:21:52.945172  500704 addons.go:248] addon dashboard should already be in state true
	I1129 10:21:52.945215  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:52.945827  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.952406  500704 out.go:179] * Verifying Kubernetes components...
	I1129 10:21:52.953099  500704 addons.go:70] Setting default-storageclass=true in profile "embed-certs-708011"
	I1129 10:21:52.953242  500704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-708011"
	I1129 10:21:52.954496  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.958226  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:52.995482  500704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:21:52.998370  500704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:21:53.001268  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:21:53.001314  500704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:21:53.001417  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.015621  500704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:21:53.018710  500704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:21:53.018736  500704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:21:53.018805  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.049993  500704 addons.go:239] Setting addon default-storageclass=true in "embed-certs-708011"
	W1129 10:21:53.050021  500704 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:21:53.050048  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:53.050554  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:53.078630  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:53.096290  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:53.111370  500704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:21:53.111391  500704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:21:53.111458  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.139130  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.602642  501976 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:21:49.602866  501976 start.go:159] libmachine.API.Create for "no-preload-949993" (driver="docker")
	I1129 10:21:49.602890  501976 client.go:173] LocalClient.Create starting
	I1129 10:21:49.602963  501976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:21:49.602995  501976 main.go:143] libmachine: Decoding PEM data...
	I1129 10:21:49.603014  501976 main.go:143] libmachine: Parsing certificate...
	I1129 10:21:49.603072  501976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:21:49.603096  501976 main.go:143] libmachine: Decoding PEM data...
	I1129 10:21:49.603113  501976 main.go:143] libmachine: Parsing certificate...
	I1129 10:21:49.603463  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:21:49.628940  501976 cli_runner.go:211] docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:21:49.629013  501976 network_create.go:284] running [docker network inspect no-preload-949993] to gather additional debugging logs...
	I1129 10:21:49.629029  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993
	W1129 10:21:49.648358  501976 cli_runner.go:211] docker network inspect no-preload-949993 returned with exit code 1
	I1129 10:21:49.648387  501976 network_create.go:287] error running [docker network inspect no-preload-949993]: docker network inspect no-preload-949993: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-949993 not found
	I1129 10:21:49.648402  501976 network_create.go:289] output of [docker network inspect no-preload-949993]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-949993 not found
	
	** /stderr **
	I1129 10:21:49.648509  501976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:21:49.675932  501976 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:21:49.676301  501976 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:21:49.676530  501976 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:21:49.676992  501976 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7420}
	I1129 10:21:49.677017  501976 network_create.go:124] attempt to create docker network no-preload-949993 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 10:21:49.677078  501976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-949993 no-preload-949993
	I1129 10:21:49.758803  501976 network_create.go:108] docker network no-preload-949993 192.168.76.0/24 created
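The subnet probing above walks candidate private /24 blocks and takes the first one no existing bridge network occupies, then creates the docker network on it. A small sketch of that selection (the step of 9 between candidates mirrors the 49/58/67/76 addresses in this log but is an assumption about the general rule):

package main

import "fmt"

// freeSubnet returns the first candidate 192.168.x.0/24 block not already taken.
func freeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing bridge networks reported by docker
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet := freeSubnet(taken)
	fmt.Printf("docker network create --driver=bridge --subnet=%s no-preload-949993\n", subnet)
}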
	I1129 10:21:49.758834  501976 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-949993" container
	I1129 10:21:49.758910  501976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:21:49.779807  501976 cli_runner.go:164] Run: docker volume create no-preload-949993 --label name.minikube.sigs.k8s.io=no-preload-949993 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:21:49.803912  501976 oci.go:103] Successfully created a docker volume no-preload-949993
	I1129 10:21:49.803990  501976 cli_runner.go:164] Run: docker run --rm --name no-preload-949993-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-949993 --entrypoint /usr/bin/test -v no-preload-949993:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:21:50.456928  501976 oci.go:107] Successfully prepared a docker volume no-preload-949993
	I1129 10:21:50.456983  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1129 10:21:50.457114  501976 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 10:21:50.457209  501976 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 10:21:50.539187  501976 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-949993 --name no-preload-949993 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-949993 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-949993 --network no-preload-949993 --ip 192.168.76.2 --volume no-preload-949993:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 10:21:50.914387  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Running}}
	I1129 10:21:50.962661  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:50.986141  501976 cli_runner.go:164] Run: docker exec no-preload-949993 stat /var/lib/dpkg/alternatives/iptables
	I1129 10:21:51.056600  501976 oci.go:144] the created container "no-preload-949993" has a running status.
	I1129 10:21:51.056625  501976 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa...
	I1129 10:21:52.186661  501976 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 10:21:52.207615  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:52.227840  501976 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 10:21:52.227859  501976 kic_runner.go:114] Args: [docker exec --privileged no-preload-949993 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 10:21:52.288495  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:52.315019  501976 machine.go:94] provisionDockerMachine start ...
	I1129 10:21:52.315102  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:52.355912  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:52.356380  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:52.356397  501976 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:21:52.357071  501976 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46092->127.0.0.1:33441: read: connection reset by peer
	I1129 10:21:53.496604  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:21:53.515130  500704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:21:53.599764  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:21:53.599786  500704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:21:53.606118  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:21:53.669583  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:21:53.669651  500704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:21:53.750386  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:21:53.750414  500704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:21:53.829995  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:21:53.830021  500704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:21:53.846679  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:21:53.846708  500704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:21:53.861141  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:21:53.861169  500704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:21:53.876970  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:21:53.876998  500704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:21:53.891849  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:21:53.891877  500704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:21:53.905837  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:21:53.905865  500704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:21:53.924457  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
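The single kubectl apply above is assembled from the list of dashboard manifests staged into /etc/kubernetes/addons. A trivial sketch of building that command line (the helper itself is illustrative):

package main

import (
	"fmt"
	"strings"
)

// applyCmd builds one "kubectl apply" invocation covering all staged manifests.
func applyCmd(kubectl string, manifests []string) string {
	args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(applyCmd("/var/lib/minikube/binaries/v1.34.1/kubectl", []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}))
}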
	I1129 10:21:55.538714  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:21:55.538787  501976 ubuntu.go:182] provisioning hostname "no-preload-949993"
	I1129 10:21:55.538898  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:55.567685  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:55.568004  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:55.568016  501976 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-949993 && echo "no-preload-949993" | sudo tee /etc/hostname
	I1129 10:21:55.776979  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:21:55.777150  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:55.810413  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:55.810777  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:55.810801  501976 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-949993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-949993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-949993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:21:56.019163  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:21:56.019191  501976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:21:56.019221  501976 ubuntu.go:190] setting up certificates
	I1129 10:21:56.019231  501976 provision.go:84] configureAuth start
	I1129 10:21:56.019301  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:56.051767  501976 provision.go:143] copyHostCerts
	I1129 10:21:56.051863  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:21:56.051880  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:21:56.051962  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:21:56.052082  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:21:56.052093  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:21:56.052125  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:21:56.052198  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:21:56.052209  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:21:56.052236  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:21:56.052305  501976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.no-preload-949993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-949993]
	I1129 10:21:56.570023  501976 provision.go:177] copyRemoteCerts
	I1129 10:21:56.570117  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:21:56.570170  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:56.611789  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:56.727463  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:21:56.761273  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:21:56.804014  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:21:56.836897  501976 provision.go:87] duration metric: took 817.652726ms to configureAuth
	I1129 10:21:56.836975  501976 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:21:56.837223  501976 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:56.837373  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:56.874226  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:56.874530  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:56.874546  501976 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:21:57.286460  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:21:57.286494  501976 machine.go:97] duration metric: took 4.971457554s to provisionDockerMachine
	I1129 10:21:57.286505  501976 client.go:176] duration metric: took 7.683609067s to LocalClient.Create
	I1129 10:21:57.286519  501976 start.go:167] duration metric: took 7.683654401s to libmachine.API.Create "no-preload-949993"
	I1129 10:21:57.286526  501976 start.go:293] postStartSetup for "no-preload-949993" (driver="docker")
	I1129 10:21:57.286551  501976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:21:57.286623  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:21:57.286678  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.308367  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.424909  501976 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:21:57.430750  501976 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:21:57.430779  501976 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:21:57.430799  501976 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:21:57.430859  501976 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:21:57.430953  501976 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:21:57.431069  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:21:57.444260  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:57.483785  501976 start.go:296] duration metric: took 197.243744ms for postStartSetup
	I1129 10:21:57.484232  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:57.509021  501976 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:21:57.509357  501976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:21:57.509407  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.540658  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.670473  501976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:21:57.678696  501976 start.go:128] duration metric: took 8.07940633s to createHost
	I1129 10:21:57.678735  501976 start.go:83] releasing machines lock for "no-preload-949993", held for 8.07954097s
	I1129 10:21:57.678819  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:57.707733  501976 ssh_runner.go:195] Run: cat /version.json
	I1129 10:21:57.707797  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.708033  501976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:21:57.708096  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.750881  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.759861  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.870070  501976 ssh_runner.go:195] Run: systemctl --version
	I1129 10:21:57.997357  501976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:21:58.089365  501976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:21:58.094857  501976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:21:58.094971  501976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:21:58.144844  501976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 10:21:58.144921  501976 start.go:496] detecting cgroup driver to use...
	I1129 10:21:58.144962  501976 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:21:58.145064  501976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:21:58.177507  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:21:58.194733  501976 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:21:58.194830  501976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:21:58.227319  501976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:21:58.248640  501976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:21:58.467119  501976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:21:58.684578  501976 docker.go:234] disabling docker service ...
	I1129 10:21:58.684702  501976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:21:58.728358  501976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:21:58.753439  501976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:21:58.975612  501976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:21:59.194680  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:21:59.224249  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:21:59.252517  501976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:21:59.252634  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.275696  501976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:21:59.275816  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.290540  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.308804  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.325262  501976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:21:59.339436  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.355482  501976 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.380780  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.392850  501976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:21:59.405619  501976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:21:59.413931  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:59.633734  501976 ssh_runner.go:195] Run: sudo systemctl restart crio
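The sed one-liners above rewrite the pause_image and cgroup_manager entries in the CRI-O drop-in before the daemon is reloaded and restarted. A sketch of the same in-place patching in Go (simplified: it edits a string rather than /etc/crio/crio.conf.d/02-crio.conf on the node):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf replaces the pause_image and cgroup_manager lines, as the
// logged sed commands do.
func patchCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(sample, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
}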
	I1129 10:21:59.871391  501976 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:21:59.871541  501976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:21:59.882902  501976 start.go:564] Will wait 60s for crictl version
	I1129 10:21:59.883018  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:21:59.890737  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:21:59.939200  501976 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
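The "Will wait 60s for socket path" step above reduces to polling for /var/run/crio/crio.sock before asking crictl for its version. A minimal sketch of that wait loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}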
	I1129 10:21:59.939357  501976 ssh_runner.go:195] Run: crio --version
	I1129 10:22:00.004861  501976 ssh_runner.go:195] Run: crio --version
	I1129 10:22:00.179566  501976 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:21:59.110602  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.613922853s)
	I1129 10:21:59.110959  500704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.595806777s)
	I1129 10:21:59.110991  500704 node_ready.go:35] waiting up to 6m0s for node "embed-certs-708011" to be "Ready" ...
	I1129 10:21:59.422567  500704 node_ready.go:49] node "embed-certs-708011" is "Ready"
	I1129 10:21:59.422651  500704 node_ready.go:38] duration metric: took 311.639365ms for node "embed-certs-708011" to be "Ready" ...
	I1129 10:21:59.422682  500704 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:21:59.422750  500704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:22:01.482517  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.876368423s)
	I1129 10:22:01.673929  500704 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.25111642s)
	I1129 10:22:01.673960  500704 api_server.go:72] duration metric: took 8.730573624s to wait for apiserver process to appear ...
	I1129 10:22:01.673966  500704 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:22:01.673984  500704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:22:01.674825  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.750328082s)
	I1129 10:22:01.677915  500704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-708011 addons enable metrics-server
	
	I1129 10:22:01.680759  500704 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:22:01.683785  500704 addons.go:530] duration metric: took 8.74005559s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:22:01.686877  500704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:22:01.686904  500704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:22:02.174180  500704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:22:02.195628  500704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:22:02.198419  500704 api_server.go:141] control plane version: v1.34.1
	I1129 10:22:02.198443  500704 api_server.go:131] duration metric: took 524.470969ms to wait for apiserver health ...
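The healthz wait above polls https://192.168.85.2:8443/healthz until it stops returning 500 (here, while the rbac/bootstrap-roles post-start hook finishes) and answers 200. A self-contained Go sketch of that loop; TLS verification is skipped only to keep the example short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 or timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}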
	I1129 10:22:02.198452  500704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:22:02.209282  500704 system_pods.go:59] 8 kube-system pods found
	I1129 10:22:02.209321  500704 system_pods.go:61] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:22:02.209331  500704 system_pods.go:61] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:22:02.209336  500704 system_pods.go:61] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:22:02.209344  500704 system_pods.go:61] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:22:02.209352  500704 system_pods.go:61] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:22:02.209356  500704 system_pods.go:61] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:22:02.209362  500704 system_pods.go:61] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:22:02.209366  500704 system_pods.go:61] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Running
	I1129 10:22:02.209373  500704 system_pods.go:74] duration metric: took 10.91527ms to wait for pod list to return data ...
	I1129 10:22:02.209382  500704 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:22:02.215389  500704 default_sa.go:45] found service account: "default"
	I1129 10:22:02.215472  500704 default_sa.go:55] duration metric: took 6.083608ms for default service account to be created ...
	I1129 10:22:02.215498  500704 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:22:02.226434  500704 system_pods.go:86] 8 kube-system pods found
	I1129 10:22:02.226522  500704 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:22:02.226548  500704 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:22:02.226589  500704 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:22:02.226616  500704 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:22:02.226638  500704 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:22:02.226675  500704 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:22:02.226701  500704 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:22:02.226732  500704 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Running
	I1129 10:22:02.226768  500704 system_pods.go:126] duration metric: took 11.250954ms to wait for k8s-apps to be running ...
	I1129 10:22:02.226795  500704 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:22:02.226923  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:02.264284  500704 system_svc.go:56] duration metric: took 37.480833ms WaitForService to wait for kubelet
	I1129 10:22:02.264365  500704 kubeadm.go:587] duration metric: took 9.320977025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:22:02.264399  500704 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:22:02.284816  500704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:22:02.284899  500704 node_conditions.go:123] node cpu capacity is 2
	I1129 10:22:02.284926  500704 node_conditions.go:105] duration metric: took 20.490406ms to run NodePressure ...
	I1129 10:22:02.284965  500704 start.go:242] waiting for startup goroutines ...
	I1129 10:22:02.284988  500704 start.go:247] waiting for cluster config update ...
	I1129 10:22:02.285012  500704 start.go:256] writing updated cluster config ...
	I1129 10:22:02.285307  500704 ssh_runner.go:195] Run: rm -f paused
	I1129 10:22:02.291454  500704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:22:02.304732  500704 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:00.182800  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:22:00.226663  501976 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:22:00.232923  501976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:22:00.251332  501976 kubeadm.go:884] updating cluster {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:22:00.251462  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:22:00.251522  501976 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:22:00.326610  501976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 10:22:00.326643  501976 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 10:22:00.326719  501976 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.326989  501976 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.327088  501976 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.327186  501976 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.327675  501976 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.327828  501976 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.328750  501976 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:00.328950  501976 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.329447  501976 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.330237  501976 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.330503  501976 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.330666  501976 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.330819  501976 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.331774  501976 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.331964  501976 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.332429  501976 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:00.657513  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1129 10:22:00.662898  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.669935  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.676141  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.681518  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.685133  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.694651  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.832698  501976 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1129 10:22:00.832790  501976 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.832861  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.928107  501976 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1129 10:22:00.928214  501976 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.928301  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.969603  501976 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1129 10:22:00.969704  501976 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.969785  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.969910  501976 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1129 10:22:00.969985  501976 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.970071  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.970016  501976 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1129 10:22:00.970229  501976 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.970299  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984316  501976 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1129 10:22:00.984359  501976 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.984418  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984495  501976 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1129 10:22:00.984515  501976 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.984543  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984624  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:00.984684  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.996003  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.996075  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.996138  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.162782  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.162864  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.162929  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:01.162986  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:01.167581  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.167661  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:01.167712  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:01.399625  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:01.399707  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.399762  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.402185  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:01.402315  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:01.402371  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.402441  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:01.587929  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.588009  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1129 10:22:01.588084  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:01.588159  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.638360  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 10:22:01.638473  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:01.638536  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 10:22:01.638600  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:01.638698  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 10:22:01.638754  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 10:22:01.638815  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1129 10:22:01.638862  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 10:22:01.716607  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 10:22:01.716707  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:01.716783  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 10:22:01.716834  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:01.716885  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 10:22:01.716901  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1129 10:22:01.716958  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 10:22:01.716973  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1129 10:22:01.717011  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 10:22:01.717023  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1129 10:22:01.717063  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 10:22:01.717076  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1129 10:22:01.717125  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 10:22:01.717140  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	W1129 10:22:01.730483  501976 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1129 10:22:01.730684  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:01.805232  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 10:22:01.805284  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1129 10:22:01.805365  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 10:22:01.805389  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1129 10:22:01.902685  501976 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:01.902806  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:02.068833  501976 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1129 10:22:02.068883  501976 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.068951  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:02.496317  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.496425  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1129 10:22:02.725614  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.745151  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:02.745230  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:02.954329  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 10:22:04.337825  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:06.818418  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:05.036367  501976 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.082000864s)
	I1129 10:22:05.036413  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 10:22:05.036495  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:05.036603  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.291355111s)
	I1129 10:22:05.036629  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 10:22:05.036680  501976 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:05.036759  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:07.608967  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.572168293s)
	I1129 10:22:07.609044  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 10:22:07.609077  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:07.609158  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:07.609271  501976 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.572765822s)
	I1129 10:22:07.609305  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 10:22:07.609355  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1129 10:22:09.211271  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.602072971s)
	I1129 10:22:09.211295  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 10:22:09.211311  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 10:22:09.211359  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1129 10:22:09.315635  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:11.370342  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:11.084382  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.873000883s)
	I1129 10:22:11.084412  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 10:22:11.084431  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:11.084491  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:13.305263  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.220747528s)
	I1129 10:22:13.305291  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 10:22:13.305308  501976 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 10:22:13.305356  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1129 10:22:13.810151  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:15.811060  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:17.817233  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:18.254033  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.948651878s)
	I1129 10:22:18.254066  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 10:22:18.254121  501976 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:18.254198  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:18.921653  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 10:22:18.921692  501976 cache_images.go:125] Successfully loaded all cached images
	I1129 10:22:18.921698  501976 cache_images.go:94] duration metric: took 18.595042475s to LoadCachedImages
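
The sequence above is the no-preload path: since no crio preload tarball exists for this image set, each required image is taken from minikube's local cache, scp'd to the node as a tarball under /var/lib/minikube/images, and loaded with podman into the shared containers/storage that cri-o reads, at which point crictl can see it. A minimal manual sketch of the same round trip, using the file name from the log (adjust for the image in question):

    # Ask the runtime what it already has (this is the check that decided images were missing).
    sudo crictl images --output json | head

    # Load one cached tarball into containers/storage; cri-o sees it immediately.
    sudo podman load -i /var/lib/minikube/images/pause_3.10.1

    # Confirm the image is now known to the runtime.
    sudo crictl images | grep pause
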
	I1129 10:22:18.921711  501976 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:22:18.921799  501976 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-949993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:22:18.921907  501976 ssh_runner.go:195] Run: crio config
	I1129 10:22:19.007982  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:22:19.008059  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:22:19.008092  501976 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:22:19.008152  501976 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-949993 NodeName:no-preload-949993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:22:19.008336  501976 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-949993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
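
The file printed above is the complete kubeadm configuration minikube generates for this profile: four documents separated by ---, an InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new and, later in this log, copied to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init --config. Two hedged checks for a config like this, assuming kubeadm v1.34 is already under /var/lib/minikube/binaries on the node (as it is after the binary transfer below):

    # Validate the documents against their schemas without changing anything.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

    # Or walk the whole init flow in dry-run mode to see what kubeadm would write.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
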
	
	I1129 10:22:19.008467  501976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:22:19.017281  501976 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 10:22:19.017371  501976 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 10:22:19.026274  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1129 10:22:19.026408  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	I1129 10:22:19.026461  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:19.026551  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	I1129 10:22:19.026581  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 10:22:19.026616  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 10:22:19.032033  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 10:22:19.032067  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1129 10:22:19.051436  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 10:22:19.051474  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1129 10:22:19.051613  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 10:22:19.072940  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 10:22:19.073020  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1129 10:22:19.962533  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:22:19.970344  501976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:22:19.985716  501976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:22:20.001389  501976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1129 10:22:20.023047  501976 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:22:20.027652  501976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:22:20.041950  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:22:20.168522  501976 ssh_runner.go:195] Run: sudo systemctl start kubelet
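
Because this is the no-preload profile, the v1.34.1 kubeadm/kubelet/kubectl binaries are not taken from a preload either: the log above shows them fetched from dl.k8s.io (checked against the published .sha256 files) and scp'd into /var/lib/minikube/binaries/v1.34.1 before the kubelet systemd drop-in is written and the service started. A sketch of the same download-and-verify step for a single binary, using the URL pattern from the log; the install path is the one the unit file's ExecStart points at:

    VER=v1.34.1; ARCH=arm64; BIN=kubelet
    curl -fsSLo "$BIN"        "https://dl.k8s.io/release/$VER/bin/linux/$ARCH/$BIN"
    curl -fsSLo "$BIN.sha256" "https://dl.k8s.io/release/$VER/bin/linux/$ARCH/$BIN.sha256"
    # The .sha256 file holds only the digest, so build the "digest  filename" line sha256sum expects.
    echo "$(cat "$BIN.sha256")  $BIN" | sha256sum --check
    sudo install -m 0755 "$BIN" "/var/lib/minikube/binaries/$VER/$BIN"
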
	I1129 10:22:20.185858  501976 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993 for IP: 192.168.76.2
	I1129 10:22:20.185877  501976 certs.go:195] generating shared ca certs ...
	I1129 10:22:20.185894  501976 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.186040  501976 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:22:20.186125  501976 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:22:20.186134  501976 certs.go:257] generating profile certs ...
	I1129 10:22:20.186198  501976 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key
	I1129 10:22:20.186214  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt with IP's: []
	I1129 10:22:20.463083  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt ...
	I1129 10:22:20.463123  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: {Name:mk4b581f7eb26bf54bbcc9fff9bb33d1486cf7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.463362  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key ...
	I1129 10:22:20.463378  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key: {Name:mk714e8a10132529e0b91fcdae06d626fc7556e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.463485  501976 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f
	I1129 10:22:20.463506  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 10:22:20.586570  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f ...
	I1129 10:22:20.586603  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f: {Name:mkf7d78d1b942aedb1b07bbb205304740db88aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.586797  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f ...
	I1129 10:22:20.586811  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f: {Name:mkf0d6e044f009e6ce32172ee6072ce3909aa312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.586897  501976 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt
	I1129 10:22:20.586980  501976 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key
	I1129 10:22:20.587045  501976 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key
	I1129 10:22:20.587063  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt with IP's: []
	I1129 10:22:20.885913  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt ...
	I1129 10:22:20.885946  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt: {Name:mka3ee1b3704a5d22582f4d70df1101ba6dea36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.886158  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key ...
	I1129 10:22:20.886181  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key: {Name:mk5fdf5d105235ce3b0a3b4223d2de2ec844c566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.886386  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:22:20.886435  501976 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:22:20.886449  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:22:20.886478  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:22:20.886513  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:22:20.886543  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:22:20.886591  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:22:20.887144  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:22:20.905564  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:22:20.929584  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:22:20.950528  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:22:20.970909  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:22:20.995523  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:22:21.015657  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:22:21.033943  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:22:21.052584  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:22:21.071956  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:22:21.089621  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:22:21.107728  501976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:22:21.121184  501976 ssh_runner.go:195] Run: openssl version
	I1129 10:22:21.130633  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:22:21.141405  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.145477  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.145542  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.189324  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:22:21.198387  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:22:21.206520  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.210480  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.210574  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.251358  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:22:21.259818  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:22:21.268054  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.271899  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.271966  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.312897  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
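
The repeated openssl/ln pairs above are how the shared CA material ends up in the node's system trust store: each PEM is copied to /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a symlink named <hash>.0 is created in /etc/ssl/certs, which is the lookup scheme OpenSSL-based clients use. The same two steps by hand, for the minikubeCA file named in the log:

    # Compute the subject hash OpenSSL uses as the lookup key.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Expose the cert under <hash>.0 on the default verify path.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
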
	I1129 10:22:21.321443  501976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:22:21.325218  501976 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 10:22:21.325320  501976 kubeadm.go:401] StartCluster: {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:22:21.325406  501976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:22:21.325484  501976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:22:21.356374  501976 cri.go:89] found id: ""
	I1129 10:22:21.356554  501976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:22:21.364645  501976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 10:22:21.372614  501976 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 10:22:21.372714  501976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 10:22:21.381024  501976 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 10:22:21.381086  501976 kubeadm.go:158] found existing configuration files:
	
	I1129 10:22:21.381146  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 10:22:21.389030  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 10:22:21.389123  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 10:22:21.396579  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 10:22:21.404825  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 10:22:21.404893  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 10:22:21.412406  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 10:22:21.420415  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 10:22:21.420538  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 10:22:21.428777  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 10:22:21.436764  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 10:22:21.436865  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 10:22:21.444387  501976 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 10:22:21.483202  501976 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 10:22:21.483447  501976 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 10:22:21.509939  501976 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 10:22:21.510037  501976 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 10:22:21.510142  501976 kubeadm.go:319] OS: Linux
	I1129 10:22:21.510212  501976 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 10:22:21.510275  501976 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 10:22:21.510337  501976 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 10:22:21.510405  501976 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 10:22:21.510467  501976 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 10:22:21.510534  501976 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 10:22:21.510594  501976 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 10:22:21.510648  501976 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 10:22:21.510709  501976 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 10:22:21.590613  501976 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 10:22:21.590744  501976 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 10:22:21.590851  501976 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 10:22:21.614845  501976 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1129 10:22:19.830446  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:22.319382  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:21.621800  501976 out.go:252]   - Generating certificates and keys ...
	I1129 10:22:21.621918  501976 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 10:22:21.621998  501976 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 10:22:21.954032  501976 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 10:22:22.810720  501976 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 10:22:23.091065  501976 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 10:22:23.261826  501976 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 10:22:23.316753  501976 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 10:22:23.317046  501976 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-949993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:22:23.723567  501976 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 10:22:23.723986  501976 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-949993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:22:23.857951  501976 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 10:22:24.238097  501976 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 10:22:24.504202  501976 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:22:24.504501  501976 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:22:24.811358  501976 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:22:25.289001  501976 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:22:26.403623  501976 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:22:27.147284  501976 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:22:27.762029  501976 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:22:27.762626  501976 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:22:27.766720  501976 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1129 10:22:24.814129  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:27.341823  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:27.770195  501976 out.go:252]   - Booting up control plane ...
	I1129 10:22:27.770304  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:22:27.770388  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:22:27.771511  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:22:27.789358  501976 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:22:27.789468  501976 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:22:27.797073  501976 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:22:27.797405  501976 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:22:27.797451  501976 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:22:27.940798  501976 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:22:27.940920  501976 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1129 10:22:29.810759  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:32.317582  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:29.941179  501976 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000710001s
	I1129 10:22:29.945156  501976 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:22:29.945547  501976 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 10:22:29.945892  501976 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:22:29.946627  501976 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:22:33.446553  501976 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.49970414s
	I1129 10:22:35.535831  501976 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.588752495s
	I1129 10:22:36.448320  501976 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50192286s
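
The control-plane-check lines above poll the standard component health endpoints: the apiserver's /livez on the advertise address, and the controller-manager and scheduler health endpoints on their localhost secure ports (10257 and 10259). When this phase hangs during a failed start, the same endpoints can be probed by hand from inside the node (-k because these ports serve self-signed certificates):

    curl -k https://192.168.76.2:8443/livez
    curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez      # kube-scheduler
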
	I1129 10:22:36.472744  501976 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:22:36.489204  501976 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:22:36.505277  501976 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:22:36.505483  501976 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-949993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:22:36.517801  501976 kubeadm.go:319] [bootstrap-token] Using token: l5g4yc.1v5lim4xatob3w56
	I1129 10:22:36.520777  501976 out.go:252]   - Configuring RBAC rules ...
	I1129 10:22:36.520905  501976 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:22:36.528179  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:22:36.539590  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:22:36.547488  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:22:36.551621  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:22:36.558681  501976 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:22:36.857162  501976 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:22:37.336941  501976 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:22:37.855570  501976 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:22:37.856863  501976 kubeadm.go:319] 
	I1129 10:22:37.856941  501976 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:22:37.856952  501976 kubeadm.go:319] 
	I1129 10:22:37.857030  501976 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:22:37.857037  501976 kubeadm.go:319] 
	I1129 10:22:37.857062  501976 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:22:37.857130  501976 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:22:37.857184  501976 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:22:37.857191  501976 kubeadm.go:319] 
	I1129 10:22:37.857245  501976 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:22:37.857253  501976 kubeadm.go:319] 
	I1129 10:22:37.857309  501976 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:22:37.857315  501976 kubeadm.go:319] 
	I1129 10:22:37.857366  501976 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:22:37.857449  501976 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:22:37.857522  501976 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:22:37.857529  501976 kubeadm.go:319] 
	I1129 10:22:37.857620  501976 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:22:37.857699  501976 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:22:37.857707  501976 kubeadm.go:319] 
	I1129 10:22:37.857791  501976 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token l5g4yc.1v5lim4xatob3w56 \
	I1129 10:22:37.857896  501976 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:22:37.857921  501976 kubeadm.go:319] 	--control-plane 
	I1129 10:22:37.857928  501976 kubeadm.go:319] 
	I1129 10:22:37.858014  501976 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:22:37.858022  501976 kubeadm.go:319] 
	I1129 10:22:37.858135  501976 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token l5g4yc.1v5lim4xatob3w56 \
	I1129 10:22:37.858250  501976 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:22:37.863120  501976 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:22:37.863353  501976 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:22:37.863476  501976 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:22:37.863501  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:22:37.863510  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:22:37.866561  501976 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1129 10:22:34.812769  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:37.319023  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:37.869450  501976 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:22:37.876313  501976 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:22:37.876334  501976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:22:37.897975  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:22:38.210160  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:38.210283  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-949993 minikube.k8s.io/updated_at=2025_11_29T10_22_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-949993 minikube.k8s.io/primary=true
	I1129 10:22:38.210044  501976 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:22:38.556793  501976 ops.go:34] apiserver oom_adj: -16
	I1129 10:22:38.556923  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:39.057048  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:38.810477  500704 pod_ready.go:94] pod "coredns-66bc5c9577-5frc4" is "Ready"
	I1129 10:22:38.810506  500704 pod_ready.go:86] duration metric: took 36.505705488s for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.813348  500704 pod_ready.go:83] waiting for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.818383  500704 pod_ready.go:94] pod "etcd-embed-certs-708011" is "Ready"
	I1129 10:22:38.818457  500704 pod_ready.go:86] duration metric: took 5.083334ms for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.820731  500704 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.825624  500704 pod_ready.go:94] pod "kube-apiserver-embed-certs-708011" is "Ready"
	I1129 10:22:38.825658  500704 pod_ready.go:86] duration metric: took 4.846769ms for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.828449  500704 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.008800  500704 pod_ready.go:94] pod "kube-controller-manager-embed-certs-708011" is "Ready"
	I1129 10:22:39.008831  500704 pod_ready.go:86] duration metric: took 180.356573ms for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.207960  500704 pod_ready.go:83] waiting for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.608135  500704 pod_ready.go:94] pod "kube-proxy-phs6g" is "Ready"
	I1129 10:22:39.608166  500704 pod_ready.go:86] duration metric: took 400.176887ms for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.808750  500704 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:40.208519  500704 pod_ready.go:94] pod "kube-scheduler-embed-certs-708011" is "Ready"
	I1129 10:22:40.208560  500704 pod_ready.go:86] duration metric: took 399.740049ms for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:40.208574  500704 pod_ready.go:40] duration metric: took 37.91704367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:22:40.266261  500704 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:22:40.271290  500704 out.go:179] * Done! kubectl is now configured to use "embed-certs-708011" cluster and "default" namespace by default
	I1129 10:22:39.557113  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:40.057492  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:40.557707  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:41.056992  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:41.557331  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:42.057866  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:42.253622  501976 kubeadm.go:1114] duration metric: took 4.043540405s to wait for elevateKubeSystemPrivileges
	I1129 10:22:42.253650  501976 kubeadm.go:403] duration metric: took 20.92833643s to StartCluster
	I1129 10:22:42.253668  501976 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:42.253730  501976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:22:42.255371  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:42.255635  501976 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:22:42.255862  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:22:42.256167  501976 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:22:42.256214  501976 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:22:42.256277  501976 addons.go:70] Setting storage-provisioner=true in profile "no-preload-949993"
	I1129 10:22:42.256292  501976 addons.go:239] Setting addon storage-provisioner=true in "no-preload-949993"
	I1129 10:22:42.256373  501976 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:22:42.256953  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.258295  501976 addons.go:70] Setting default-storageclass=true in profile "no-preload-949993"
	I1129 10:22:42.258332  501976 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-949993"
	I1129 10:22:42.258662  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.259364  501976 out.go:179] * Verifying Kubernetes components...
	I1129 10:22:42.262237  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:22:42.296552  501976 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:42.301361  501976 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:22:42.301389  501976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:22:42.301478  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:22:42.303972  501976 addons.go:239] Setting addon default-storageclass=true in "no-preload-949993"
	I1129 10:22:42.304015  501976 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:22:42.304452  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.343585  501976 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:22:42.343624  501976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:22:42.343711  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:22:42.354263  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:22:42.387880  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:22:42.648189  501976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:22:42.663978  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:22:42.693219  501976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:22:42.724420  501976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:22:43.375159  501976 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 10:22:43.376313  501976 node_ready.go:35] waiting up to 6m0s for node "no-preload-949993" to be "Ready" ...
	I1129 10:22:43.427029  501976 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:22:43.429985  501976 addons.go:530] duration metric: took 1.173754136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 10:22:43.880661  501976 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-949993" context rescaled to 1 replicas
	W1129 10:22:45.381091  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:47.381541  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:49.381841  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:51.882950  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
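The minikube stream above ends with node_ready.go retrying until node "no-preload-949993" reports a Ready condition ("waiting up to 6m0s for node ... to be Ready"). The following is a minimal client-go sketch of that kind of wait, not minikube's actual implementation; the kubeconfig path is a placeholder and the poll interval is an assumption.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeIsReady reports whether the named node has a Ready=True condition.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		// Placeholder kubeconfig path; the test run writes its own under the build tree.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 6 minutes, mirroring the "waiting up to 6m0s" line above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				return nodeIsReady(ctx, cs, "no-preload-949993")
			})
		fmt.Println("node ready:", err == nil)
	}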
	
	
	==> CRI-O <==
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.43810367Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444124967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444221862Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444272054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448485232Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448522795Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448549881Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.453767041Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.45392747Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.454009891Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.459389885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.459579114Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.99288517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a6a1c89b-e76b-40d3-94f5-529ac8ea3072 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.993910733Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f80849d6-c017-4a64-8185-336d6189cca3 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.995233817Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=3dd0066a-1670-4d19-8c53-0b69c57143ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.995324362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.008719621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.009357101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.039331263Z" level=info msg="Created container 05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=3dd0066a-1670-4d19-8c53-0b69c57143ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.040125693Z" level=info msg="Starting container: 05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025" id=bddc51c4-d854-4c10-97fc-9b88d525a7c3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.047289691Z" level=info msg="Started container" PID=1746 containerID=05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper id=bddc51c4-d854-4c10-97fc-9b88d525a7c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0bf9fd51cf12f24ba39875c9f7bc3724d93bb93b8f7b1162d93ee2d387d3022f
	Nov 29 10:22:48 embed-certs-708011 conmon[1744]: conmon 05c5efef0f56ea5f8f85 <ninfo>: container 1746 exited with status 1
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.580943573Z" level=info msg="Removing container: 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.591894069Z" level=info msg="Error loading conmon cgroup of container 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c: cgroup deleted" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.595152142Z" level=info msg="Removed container 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
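The CRI-O entries above are its CNI configuration monitor reacting as kindnet rewrites /etc/cni/net.d/10-kindnet.conflist (WRITE, RENAME, CREATE events). A rough illustration of watching that directory with fsnotify follows; CRI-O's real watcher lives inside the daemon, so this is only a stand-in for the pattern.

	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
	
		// Watch the CNI configuration directory that CRI-O also monitors.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// WRITE/CREATE/RENAME events like the ones logged above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}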
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	05c5efef0f56e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   0bf9fd51cf12f       dashboard-metrics-scraper-6ffb444bf9-2q2nz   kubernetes-dashboard
	52238fd1860d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   ac59ac41f805f       storage-provisioner                          kube-system
	4c2d7f74c7191       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   9d50b825c1851       kubernetes-dashboard-855c9754f9-7sxs9        kubernetes-dashboard
	015314d00d4b2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   12607321302e1       coredns-66bc5c9577-5frc4                     kube-system
	26abb8ba87208       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   69fe5fcb37fe6       busybox                                      default
	826e09f43d9ea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   e1648b5370677       kube-proxy-phs6g                             kube-system
	b95113cddf489       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   f4f71385e2664       kindnet-wfvvz                                kube-system
	e1fb3814acf19       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   ac59ac41f805f       storage-provisioner                          kube-system
	727bfd303dcd3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   68cb9ddee48af       kube-controller-manager-embed-certs-708011   kube-system
	adedb317fa6de       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4f3ea82e0d3d8       etcd-embed-certs-708011                      kube-system
	2aaa1ea4482b2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7c4733c7e63c7       kube-apiserver-embed-certs-708011            kube-system
	2ef08f65fca11       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0e5fd67fe1657       kube-scheduler-embed-certs-708011            kube-system
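The container status table above is essentially what `crictl ps -a` reports. The same data can be pulled straight from the CRI socket; the sketch below assumes CRI-O's default endpoint /var/run/crio/crio.sock, which is a distro default and not something stated in this report.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O's conventional runtime endpoint; an assumption, not read from this log.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Print container ID, name, and state, roughly like the table above.
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s %-28s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}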
	
	
	==> coredns [015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51282 - 49691 "HINFO IN 472883754259937002.1044523959517425734. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00447586s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
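In the CoreDNS log above, the HINFO query is CoreDNS's own loop-detection self-check, and the i/o timeouts against 10.96.0.1:443 are the kubernetes plugin failing to list Services/EndpointSlices until the service network is reachable. A quick way to exercise the resolver from outside the pod is a lookup pointed at the cluster DNS service; 10.96.0.10 below is the conventional kube-dns ClusterIP for a 10.96.0.0/12 service CIDR and is assumed here, not taken from the log.

	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Resolver pinned to the (assumed) kube-dns ClusterIP instead of /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err)
	}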
	
	
	==> describe nodes <==
	Name:               embed-certs-708011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-708011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-708011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-708011
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:22:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:21:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-708011
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1f99ece5-e15d-4bbe-acc3-9db5d863dc89
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-5frc4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-embed-certs-708011                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-wfvvz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-708011             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-708011    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-phs6g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-708011             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2q2nz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7sxs9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-708011 event: Registered Node embed-certs-708011 in Controller
	  Normal   NodeReady                101s                   kubelet          Node embed-certs-708011 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 65s)      kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 65s)      kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 65s)      kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-708011 event: Registered Node embed-certs-708011 in Controller
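The "describe nodes" block above is the full `kubectl describe node embed-certs-708011` output. For comparison, a much smaller client-go sketch that fetches only the conditions and allocatable resources; the kubeconfig path is again a placeholder.

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-708011", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Conditions correspond to the MemoryPressure/DiskPressure/PIDPressure/Ready rows above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		// Allocatable corresponds to the cpu/memory/ephemeral-storage figures above.
		for name, qty := range node.Status.Allocatable {
			fmt.Printf("allocatable %-20s %s\n", name, qty.String())
		}
	}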
	
	
	==> dmesg <==
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457] <==
	{"level":"warn","ts":"2025-11-29T10:21:55.998214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.024280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.067368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.102426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.146302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.183553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.220579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.289413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.327481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.403245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.446304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.475183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.524015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.562609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.610157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.667948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.700769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.766883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.832686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.894616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.989492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.021665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.055074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.086997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.199550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33070","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:22:56 up  3:05,  0 user,  load average: 4.39, 3.32, 2.60
	Linux embed-certs-708011 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c] <==
	I1129 10:22:00.076514       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:22:00.136609       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:22:00.136796       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:22:00.136812       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:22:00.136825       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:22:00.428636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:22:00.428666       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:22:00.428677       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:22:00.429396       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:22:30.429295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:22:30.429414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:22:30.429732       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:22:30.429853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 10:22:31.729365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:22:31.729463       1 metrics.go:72] Registering metrics
	I1129 10:22:31.735660       1 controller.go:711] "Syncing nftables rules"
	I1129 10:22:40.429844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:22:40.429989       1 main.go:301] handling current node
	I1129 10:22:50.429085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:22:50.429119       1 main.go:301] handling current node
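The kindnet log above shows client-go reflectors timing out against 10.96.0.1:443 until the service network comes up, then the caches syncing and the per-node handling loop starting. The same list/watch pattern with a shared informer for Nodes looks roughly like this; the handler body is a stand-in for kindnet's route/nftables sync.

	package main
	
	import (
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
		nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				node := obj.(*corev1.Node)
				// Stand-in for the "Handling node with IPs" work in the log above.
				fmt.Println("handling node with IPs:", node.Status.Addresses)
			},
		})
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		// Equivalent of the "Waiting for caches to sync" / "Caches are synced" lines above.
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			panic("caches did not sync")
		}
		select {} // keep watching
	}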
	
	
	==> kube-apiserver [2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7] <==
	I1129 10:21:58.791408       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:21:58.791415       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:21:58.799585       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:21:58.804566       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:21:58.822376       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:21:58.822732       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:21:58.868101       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:21:58.891497       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:21:58.892025       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:21:58.904107       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:21:58.904686       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1129 10:21:58.944948       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:21:58.963479       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:21:58.964585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:21:59.136843       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:21:59.509299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:22:00.763666       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:22:01.040673       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:22:01.200000       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:22:01.270542       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:22:01.602420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.104"}
	I1129 10:22:01.659773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.15.98"}
	I1129 10:22:03.747587       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:22:04.125716       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:22:04.303505       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe] <==
	I1129 10:22:03.661322       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:22:03.663701       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:03.663743       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:22:03.663754       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:22:03.670721       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:22:03.671324       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:22:03.681793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:03.681924       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 10:22:03.682025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 10:22:03.682100       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 10:22:03.684307       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 10:22:03.684384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:22:03.687933       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:22:03.687968       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:22:03.688168       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:22:03.688191       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:22:03.689423       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:22:03.690897       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:22:03.693178       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:22:03.701191       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 10:22:03.706457       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 10:22:03.716827       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:22:03.718069       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:22:04.338047       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1129 10:22:04.338162       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604] <==
	I1129 10:22:01.788537       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:22:01.966801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:22:02.095754       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:22:02.095858       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:22:02.095954       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:22:02.139561       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:22:02.139622       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:22:02.151494       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:22:02.151943       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:22:02.154441       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:02.156873       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:22:02.156892       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:22:02.157246       1 config.go:200] "Starting service config controller"
	I1129 10:22:02.157254       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:22:02.157551       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:22:02.157559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:22:02.158026       1 config.go:309] "Starting node config controller"
	I1129 10:22:02.158047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:22:02.158056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:22:02.264608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 10:22:02.264756       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:22:02.331393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d] <==
	I1129 10:21:55.745920       1 serving.go:386] Generated self-signed cert in-memory
	I1129 10:22:01.958231       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:22:01.958265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:01.968106       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 10:22:01.968219       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 10:22:01.968319       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:22:01.972598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:22:01.968343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:22:01.968378       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:22:01.968392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:01.973075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:02.070045       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 10:22:02.077242       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:02.079035       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
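The scheduler serves securely on 127.0.0.1:10259, which is the endpoint kubeadm's control-plane-check polled earlier in this log ("Checking kube-scheduler at https://127.0.0.1:10259/livez"). A bare-bones version of that probe is below; certificate verification is skipped because, as the scheduler's own first line notes, it generates a self-signed serving cert in-memory.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Self-signed in-memory serving cert, so skip verification for this local probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for _, url := range []string{
			"https://127.0.0.1:10259/livez",   // kube-scheduler
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
		} {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println(url, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(url, resp.Status, string(body))
		}
	}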
	
	
	==> kubelet <==
	Nov 29 10:22:05 embed-certs-708011 kubelet[791]: W1129 10:22:05.033968     791 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/crio-9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f WatchSource:0}: Error finding container 9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f: Status 404 returned error can't find the container with id 9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f
	Nov 29 10:22:08 embed-certs-708011 kubelet[791]: I1129 10:22:08.412619     791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 10:22:11 embed-certs-708011 kubelet[791]: I1129 10:22:11.460796     791 scope.go:117] "RemoveContainer" containerID="cea0412603481947e86ae366b5fb596021f5da5ba858ad77023a74639cfaad43"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: I1129 10:22:12.459596     791 scope.go:117] "RemoveContainer" containerID="cea0412603481947e86ae366b5fb596021f5da5ba858ad77023a74639cfaad43"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: I1129 10:22:12.459914     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: E1129 10:22:12.460096     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:13 embed-certs-708011 kubelet[791]: I1129 10:22:13.465647     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:13 embed-certs-708011 kubelet[791]: E1129 10:22:13.465812     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:14 embed-certs-708011 kubelet[791]: I1129 10:22:14.690901     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:14 embed-certs-708011 kubelet[791]: E1129 10:22:14.691103     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:26 embed-certs-708011 kubelet[791]: I1129 10:22:26.991626     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.516085     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.516807     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: E1129 10:22:27.523755     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.548271     791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sxs9" podStartSLOduration=11.130343447 podStartE2EDuration="23.548251163s" podCreationTimestamp="2025-11-29 10:22:04 +0000 UTC" firstStartedPulling="2025-11-29 10:22:05.039002204 +0000 UTC m=+13.501063780" lastFinishedPulling="2025-11-29 10:22:17.456909912 +0000 UTC m=+25.918971496" observedRunningTime="2025-11-29 10:22:18.522024523 +0000 UTC m=+26.984086107" watchObservedRunningTime="2025-11-29 10:22:27.548251163 +0000 UTC m=+36.010312747"
	Nov 29 10:22:31 embed-certs-708011 kubelet[791]: I1129 10:22:31.531609     791 scope.go:117] "RemoveContainer" containerID="e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	Nov 29 10:22:34 embed-certs-708011 kubelet[791]: I1129 10:22:34.691663     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:34 embed-certs-708011 kubelet[791]: E1129 10:22:34.691844     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:47 embed-certs-708011 kubelet[791]: I1129 10:22:47.991971     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: I1129 10:22:48.579150     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: I1129 10:22:48.579465     791 scope.go:117] "RemoveContainer" containerID="05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: E1129 10:22:48.579618     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02] <==
	2025/11/29 10:22:17 Using namespace: kubernetes-dashboard
	2025/11/29 10:22:17 Using in-cluster config to connect to apiserver
	2025/11/29 10:22:17 Using secret token for csrf signing
	2025/11/29 10:22:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:22:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:22:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:22:17 Generating JWE encryption key
	2025/11/29 10:22:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:22:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:22:18 Initializing JWE encryption key from synchronized object
	2025/11/29 10:22:18 Creating in-cluster Sidecar client
	2025/11/29 10:22:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:22:18 Serving insecurely on HTTP port: 9090
	2025/11/29 10:22:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:22:17 Starting overwatch
	
	
	==> storage-provisioner [52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a] <==
	I1129 10:22:31.628310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:22:31.659514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:22:31.659645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:22:31.662946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:35.117776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:39.377516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:42.976793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:46.031424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.053306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.058175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:49.058327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:22:49.058828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8f51354-af5a-4f25-a98a-e2bfddbbd579", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b became leader
	I1129 10:22:49.058877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b!
	W1129 10:22:49.061624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.070350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:49.159786       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b!
	W1129 10:22:51.074721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:51.079643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:53.084820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:53.103114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:55.107196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:55.114271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06] <==
	I1129 10:22:00.649386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:22:30.651496       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
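The kubelet entries above show dashboard-metrics-scraper stuck in CrashLoopBackOff, with the restart back-off doubling from 10s to 20s to 40s. As a quick illustration of that schedule, here is a minimal Go sketch assuming the usual kubelet defaults of a 10s initial delay doubling up to a 5m cap (the cap is an assumption about upstream defaults, not a value taken from this run):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Assumed defaults: 10s initial back-off, doubled after each crash,
		// capped at 5 minutes. The 10s/20s/40s steps match the log lines above.
		backoff := 10 * time.Second
		maxBackoff := 5 * time.Minute
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}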
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-708011 -n embed-certs-708011: exit status 2 (535.679338ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
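The status --format={{.APIServer}} check above prints Running yet exits 2: the --format argument is a Go text/template rendered over minikube's status fields, while the exit code is driven by the components' states (the helper itself notes the status error "may be ok" for a paused cluster). A small sketch of how such a template renders a field, using an illustrative struct rather than minikube's actual status type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in; only field names that appear in the
	// --format templates of this report are included.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running", mirroring the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
	}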
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-708011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-708011
helpers_test.go:243: (dbg) docker inspect embed-certs-708011:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	        "Created": "2025-11-29T10:20:04.082616861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:21:43.917067482Z",
	            "FinishedAt": "2025-11-29T10:21:42.689938476Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/hosts",
	        "LogPath": "/var/lib/docker/containers/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a-json.log",
	        "Name": "/embed-certs-708011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-708011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-708011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a",
	                "LowerDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f06ca9d640d18565437b627de70398948d99d2466e1a09248d36a55c2ce67fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-708011",
	                "Source": "/var/lib/docker/volumes/embed-certs-708011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-708011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-708011",
	                "name.minikube.sigs.k8s.io": "embed-certs-708011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e4e911abcf81853b1352e4308648328f71985deafffaac611ad99d8d699eea4",
	            "SandboxKey": "/var/run/docker/netns/0e4e911abcf8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-708011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:9f:09:26:3b:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71caade6f8e792ec8d9dce1f07288f08e50f74b2f8fdf0dbf488e545467ec977",
	                    "EndpointID": "0fef8f051280f7c71e0c43243e434592bfc22f3dfa66a19ef8559242ffe617cb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-708011",
	                        "f6641e3603d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
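The NetworkSettings.Ports block in the inspect output above shows 8443/tcp published on 127.0.0.1:33439; this is the same mapping minikube's cli_runner reads later in this report's Last Start log via a docker container inspect -f Go template. A sketch of that lookup in Go, reusing the template shape from the log and the container name from this run (error handling kept minimal):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same template pattern the cli_runner lines in the Last Start log use
		// for 22/tcp, applied here to the API server port 8443/tcp.
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"embed-certs-708011").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Printf("8443/tcp -> host port %s", out)
	}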
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011: exit status 2 (427.275732ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-708011 logs -n 25: (1.337025296s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p running-upgrade-493711                                                                                                                                                                                                                     │ running-upgrade-493711       │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:16 UTC │
	│ start   │ -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:16 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-033056 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-033056 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                                                                                               │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:21:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:21:49.265071  501976 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:21:49.265224  501976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:21:49.265231  501976 out.go:374] Setting ErrFile to fd 2...
	I1129 10:21:49.265236  501976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:21:49.265489  501976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:21:49.265890  501976 out.go:368] Setting JSON to false
	I1129 10:21:49.266854  501976 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11059,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:21:49.266920  501976 start.go:143] virtualization:  
	I1129 10:21:49.270881  501976 out.go:179] * [no-preload-949993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:21:49.274294  501976 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:21:49.274356  501976 notify.go:221] Checking for updates...
	I1129 10:21:49.280580  501976 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:21:49.283618  501976 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:49.286549  501976 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:21:49.289477  501976 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:21:49.292650  501976 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:21:49.296285  501976 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:49.296445  501976 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:21:49.334262  501976 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:21:49.334394  501976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:21:49.441198  501976 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:21:49.431554622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:21:49.441295  501976 docker.go:319] overlay module found
	I1129 10:21:49.444546  501976 out.go:179] * Using the docker driver based on user configuration
	I1129 10:21:49.447474  501976 start.go:309] selected driver: docker
	I1129 10:21:49.447505  501976 start.go:927] validating driver "docker" against <nil>
	I1129 10:21:49.447520  501976 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:21:49.448233  501976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:21:49.559710  501976 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:21:49.545511622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:21:49.559861  501976 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:21:49.560082  501976 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:21:49.562991  501976 out.go:179] * Using Docker driver with root privileges
	I1129 10:21:49.565804  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:21:49.565876  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:21:49.565885  501976 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:21:49.565959  501976 start.go:353] cluster config:
	{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:21:49.569093  501976 out.go:179] * Starting "no-preload-949993" primary control-plane node in "no-preload-949993" cluster
	I1129 10:21:49.571878  501976 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:21:49.574795  501976 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:21:49.577556  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:21:49.577677  501976 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:21:49.577710  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json: {Name:mk358b56b7fe514be101ec22fbf5f7b1feeb0ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:49.577896  501976 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:21:49.578242  501976 cache.go:107] acquiring lock: {Name:mk7e036f21c3fa53998769ec8ca8e9d0cc43797a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578314  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 10:21:49.578322  501976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.406µs
	I1129 10:21:49.578334  501976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 10:21:49.578345  501976 cache.go:107] acquiring lock: {Name:mk55e5c5c1d216b13668659dfb1a1298483fe357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578376  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 10:21:49.578382  501976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.277µs
	I1129 10:21:49.578388  501976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 10:21:49.578397  501976 cache.go:107] acquiring lock: {Name:mk79de74aa677651359631e14e64f02dbae72c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578429  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 10:21:49.578434  501976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 39.122µs
	I1129 10:21:49.578440  501976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 10:21:49.578449  501976 cache.go:107] acquiring lock: {Name:mk3420fbe5609e73633731fff1b3352eed3a8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578478  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 10:21:49.578483  501976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 34.544µs
	I1129 10:21:49.578488  501976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 10:21:49.578507  501976 cache.go:107] acquiring lock: {Name:mkec0dc08372453f12658d7249505bdb38e0468a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578534  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 10:21:49.578539  501976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 33.28µs
	I1129 10:21:49.578544  501976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 10:21:49.578553  501976 cache.go:107] acquiring lock: {Name:mkb12ce0a127601415f42976e337ea76e82915af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578578  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 10:21:49.578582  501976 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.679µs
	I1129 10:21:49.578587  501976 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 10:21:49.578601  501976 cache.go:107] acquiring lock: {Name:mkc2341e09a949f9273b1d33b0a3b4021526fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578626  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 10:21:49.578630  501976 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 35.906µs
	I1129 10:21:49.578636  501976 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 10:21:49.578644  501976 cache.go:107] acquiring lock: {Name:mk0167a0bfcd689b945be8d473d2efef87ce9fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.578669  501976 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 10:21:49.578673  501976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.237µs
	I1129 10:21:49.578678  501976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 10:21:49.578684  501976 cache.go:87] Successfully saved all images to host disk.
	I1129 10:21:49.599024  501976 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:21:49.599043  501976 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:21:49.599057  501976 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:21:49.599087  501976 start.go:360] acquireMachinesLock for no-preload-949993: {Name:mk6ff94a11813e006c209466e9cbb5aadf7ae1bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:21:49.599183  501976 start.go:364] duration metric: took 80.222µs to acquireMachinesLock for "no-preload-949993"
	I1129 10:21:49.599210  501976 start.go:93] Provisioning new machine with config: &{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:21:49.599275  501976 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:21:48.685952  500704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:21:48.685972  500704 machine.go:97] duration metric: took 4.38105876s to provisionDockerMachine
	I1129 10:21:48.685984  500704 start.go:293] postStartSetup for "embed-certs-708011" (driver="docker")
	I1129 10:21:48.685995  500704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:21:48.686070  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:21:48.686135  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:48.717516  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:48.854265  500704 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:21:48.857813  500704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:21:48.857841  500704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:21:48.857852  500704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:21:48.857912  500704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:21:48.858000  500704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:21:48.858148  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:21:48.866324  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:48.886427  500704 start.go:296] duration metric: took 200.427369ms for postStartSetup
	I1129 10:21:48.886527  500704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:21:48.886578  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:48.914159  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.023500  500704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:21:49.028855  500704 fix.go:56] duration metric: took 5.197571787s for fixHost
	I1129 10:21:49.028883  500704 start.go:83] releasing machines lock for "embed-certs-708011", held for 5.197619959s
	I1129 10:21:49.028963  500704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-708011
	I1129 10:21:49.046939  500704 ssh_runner.go:195] Run: cat /version.json
	I1129 10:21:49.046985  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:49.047021  500704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:21:49.047072  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:49.073571  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.077827  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.328539  500704 ssh_runner.go:195] Run: systemctl --version
	I1129 10:21:49.335502  500704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:21:49.379038  500704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:21:49.384009  500704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:21:49.384084  500704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:21:49.393348  500704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
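
Aside: the find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so they cannot conflict with the CNI that minikube installs (kindnet in this run). A minimal local Go sketch of the same rename, assuming the standard /etc/cni/net.d directory and sufficient privileges; this is an illustration, not minikube's code:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		return // directory absent: nothing to disable, as in the log above
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as `find -name *bridge* -or -name *podman*` above.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			p := filepath.Join(dir, name)
			_ = os.Rename(p, p+".mk_disabled")
		}
	}
}
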
	I1129 10:21:49.393374  500704 start.go:496] detecting cgroup driver to use...
	I1129 10:21:49.393406  500704 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:21:49.393463  500704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:21:49.410534  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:21:49.431149  500704 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:21:49.431214  500704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:21:49.453131  500704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:21:49.473390  500704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:21:49.627380  500704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:21:49.794820  500704 docker.go:234] disabling docker service ...
	I1129 10:21:49.794949  500704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:21:49.811462  500704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:21:49.828309  500704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:21:49.990148  500704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:21:50.180945  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:21:50.196830  500704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:21:50.230144  500704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:21:50.230232  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.250536  500704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:21:50.250611  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.264034  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.276285  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.290857  500704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:21:50.304652  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.316890  500704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.331852  500704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:50.358542  500704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:21:50.379460  500704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:21:50.388513  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:50.542496  500704 ssh_runner.go:195] Run: sudo systemctl restart crio
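
The block above is the cri-o preparation sequence: crictl is pointed at the cri-o socket via /etc/crictl.yaml, the drop-in /etc/crio/crio.conf.d/02-crio.conf is edited in place (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls), IP forwarding is enabled, and cri-o is restarted. A standalone sketch that replays the same shell edits (assumes passwordless sudo on the node; not minikube's own code, which drives these over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one of the shell snippets from the log above under sudo.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Point crictl at the cri-o socket (equivalent to the printf|tee above).
		`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
		// Pause image and cgroup driver.
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		// Allow unprivileged low ports inside pods.
		`grep -q '^ *default_sysctls' ` + conf + ` || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' ` + conf,
		`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf,
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload && systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("error:", err)
			return
		}
	}
}
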
	I1129 10:21:50.765924  500704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:21:50.766007  500704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:21:50.770894  500704 start.go:564] Will wait 60s for crictl version
	I1129 10:21:50.770967  500704 ssh_runner.go:195] Run: which crictl
	I1129 10:21:50.775452  500704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:21:50.811556  500704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:21:50.811689  500704 ssh_runner.go:195] Run: crio --version
	I1129 10:21:50.842901  500704 ssh_runner.go:195] Run: crio --version
	I1129 10:21:50.898181  500704 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:21:50.901392  500704 cli_runner.go:164] Run: docker network inspect embed-certs-708011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:21:50.936721  500704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:21:50.941226  500704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
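
The one-liner above makes the host.minikube.internal entry idempotent: it strips any existing line ending in that name from /etc/hosts, appends the fresh mapping to a temp file, and copies the result back. A pure-Go sketch of the same update (hypothetical helper name upsertHost; not minikube's implementation):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var out []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, mirroring `grep -v $'\t<name>$'`
		}
		out = append(out, line)
	}
	if err := sc.Err(); err != nil {
		return err
	}
	out = append(out, ip+"\t"+name)
	// Write to a temp file first, then replace, like the `/tmp/h.$$; cp` step.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
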
	I1129 10:21:50.961655  500704 kubeadm.go:884] updating cluster {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:21:50.961795  500704 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:21:50.961942  500704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:21:51.031859  500704 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:21:51.031882  500704 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:21:51.031940  500704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:21:51.085813  500704 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:21:51.085838  500704 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:21:51.085845  500704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 10:21:51.085955  500704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-708011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:21:51.086031  500704 ssh_runner.go:195] Run: crio config
	I1129 10:21:51.177424  500704 cni.go:84] Creating CNI manager for ""
	I1129 10:21:51.177449  500704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:21:51.177504  500704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:21:51.177539  500704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-708011 NodeName:embed-certs-708011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:21:51.177735  500704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-708011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:21:51.177823  500704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:21:51.191666  500704 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:21:51.191747  500704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:21:51.202843  500704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 10:21:51.224895  500704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:21:51.251533  500704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1129 10:21:51.274654  500704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:21:51.279866  500704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:21:51.293203  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:51.504346  500704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:21:51.542481  500704 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011 for IP: 192.168.85.2
	I1129 10:21:51.542500  500704 certs.go:195] generating shared ca certs ...
	I1129 10:21:51.542515  500704 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:51.542664  500704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:21:51.542702  500704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:21:51.542708  500704 certs.go:257] generating profile certs ...
	I1129 10:21:51.542795  500704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/client.key
	I1129 10:21:51.542861  500704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key.704f8259
	I1129 10:21:51.542909  500704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key
	I1129 10:21:51.543026  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:21:51.543054  500704 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:21:51.543061  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:21:51.543086  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:21:51.543111  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:21:51.543139  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:21:51.543181  500704 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:51.543746  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:21:51.591663  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:21:51.630446  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:21:51.671101  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:21:51.711903  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 10:21:51.761012  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 10:21:51.900958  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:21:51.984906  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/embed-certs-708011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 10:21:52.031290  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:21:52.052977  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:21:52.078132  500704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:21:52.111633  500704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:21:52.126807  500704 ssh_runner.go:195] Run: openssl version
	I1129 10:21:52.133630  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:21:52.142998  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.149549  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.149629  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:21:52.198511  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:21:52.207512  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:21:52.220889  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.226202  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.226276  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:21:52.280961  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:21:52.294053  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:21:52.309065  500704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.314650  500704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.314713  500704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:21:52.364445  500704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
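
Context for the openssl/ln sequence above: OpenSSL looks trusted CAs up in /etc/ssl/certs by subject-hash filenames, so each installed PEM gets a "<hash>.0" symlink, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem, 51391683 and 3ec20f2e for the two test certs above). A small Go sketch of the same step (assumes openssl on PATH and write access to /etc/ssl/certs; hypothetical helper name trust):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trust(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
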
	I1129 10:21:52.372701  500704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:21:52.376587  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:21:52.418498  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:21:52.473423  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:21:52.545829  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:21:52.638387  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:21:52.719303  500704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
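
The six `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now; openssl exits non-zero if the certificate expires within the given number of seconds, which presumably is how the restart path decides whether the existing certs can be reused. A one-function sketch of the same check (hypothetical helper name):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports true if the certificate at path expires within the
// next 86400 seconds (or if openssl itself fails to run).
func expiresWithinDay(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() != nil
}

func main() {
	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
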
	I1129 10:21:52.780474  500704 kubeadm.go:401] StartCluster: {Name:embed-certs-708011 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-708011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:21:52.780568  500704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:21:52.780646  500704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:21:52.834497  500704 cri.go:89] found id: "727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe"
	I1129 10:21:52.834518  500704 cri.go:89] found id: "adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457"
	I1129 10:21:52.834523  500704 cri.go:89] found id: "2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7"
	I1129 10:21:52.834535  500704 cri.go:89] found id: "2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d"
	I1129 10:21:52.834538  500704 cri.go:89] found id: ""
	I1129 10:21:52.834627  500704 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:21:52.864644  500704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:21:52Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:21:52.864717  500704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:21:52.900414  500704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:21:52.900489  500704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:21:52.900583  500704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:21:52.923660  500704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:21:52.924236  500704 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-708011" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:52.924438  500704 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-708011" cluster setting kubeconfig missing "embed-certs-708011" context setting]
	I1129 10:21:52.924785  500704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:52.926585  500704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:21:52.941602  500704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 10:21:52.941688  500704 kubeadm.go:602] duration metric: took 41.179616ms to restartPrimaryControlPlane
	I1129 10:21:52.941715  500704 kubeadm.go:403] duration metric: took 161.258042ms to StartCluster
	I1129 10:21:52.941772  500704 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:52.941872  500704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:21:52.943006  500704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:21:52.943324  500704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:21:52.943730  500704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:21:52.943815  500704 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-708011"
	I1129 10:21:52.943830  500704 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-708011"
	W1129 10:21:52.943849  500704 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:21:52.943872  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:52.944473  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.945004  500704 config.go:182] Loaded profile config "embed-certs-708011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:52.945102  500704 addons.go:70] Setting dashboard=true in profile "embed-certs-708011"
	I1129 10:21:52.945139  500704 addons.go:239] Setting addon dashboard=true in "embed-certs-708011"
	W1129 10:21:52.945172  500704 addons.go:248] addon dashboard should already be in state true
	I1129 10:21:52.945215  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:52.945827  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.952406  500704 out.go:179] * Verifying Kubernetes components...
	I1129 10:21:52.953099  500704 addons.go:70] Setting default-storageclass=true in profile "embed-certs-708011"
	I1129 10:21:52.953242  500704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-708011"
	I1129 10:21:52.954496  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:52.958226  500704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:52.995482  500704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:21:52.998370  500704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:21:53.001268  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:21:53.001314  500704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:21:53.001417  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.015621  500704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:21:53.018710  500704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:21:53.018736  500704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:21:53.018805  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.049993  500704 addons.go:239] Setting addon default-storageclass=true in "embed-certs-708011"
	W1129 10:21:53.050021  500704 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:21:53.050048  500704 host.go:66] Checking if "embed-certs-708011" exists ...
	I1129 10:21:53.050554  500704 cli_runner.go:164] Run: docker container inspect embed-certs-708011 --format={{.State.Status}}
	I1129 10:21:53.078630  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:53.096290  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:53.111370  500704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:21:53.111391  500704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:21:53.111458  500704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-708011
	I1129 10:21:53.139130  500704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/embed-certs-708011/id_rsa Username:docker}
	I1129 10:21:49.602642  501976 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:21:49.602866  501976 start.go:159] libmachine.API.Create for "no-preload-949993" (driver="docker")
	I1129 10:21:49.602890  501976 client.go:173] LocalClient.Create starting
	I1129 10:21:49.602963  501976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:21:49.602995  501976 main.go:143] libmachine: Decoding PEM data...
	I1129 10:21:49.603014  501976 main.go:143] libmachine: Parsing certificate...
	I1129 10:21:49.603072  501976 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:21:49.603096  501976 main.go:143] libmachine: Decoding PEM data...
	I1129 10:21:49.603113  501976 main.go:143] libmachine: Parsing certificate...
	I1129 10:21:49.603463  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:21:49.628940  501976 cli_runner.go:211] docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:21:49.629013  501976 network_create.go:284] running [docker network inspect no-preload-949993] to gather additional debugging logs...
	I1129 10:21:49.629029  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993
	W1129 10:21:49.648358  501976 cli_runner.go:211] docker network inspect no-preload-949993 returned with exit code 1
	I1129 10:21:49.648387  501976 network_create.go:287] error running [docker network inspect no-preload-949993]: docker network inspect no-preload-949993: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-949993 not found
	I1129 10:21:49.648402  501976 network_create.go:289] output of [docker network inspect no-preload-949993]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-949993 not found
	
	** /stderr **
	I1129 10:21:49.648509  501976 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:21:49.675932  501976 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:21:49.676301  501976 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:21:49.676530  501976 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:21:49.676992  501976 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7420}
	I1129 10:21:49.677017  501976 network_create.go:124] attempt to create docker network no-preload-949993 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 10:21:49.677078  501976 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-949993 no-preload-949993
	I1129 10:21:49.758803  501976 network_create.go:108] docker network no-preload-949993 192.168.76.0/24 created
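
In the network_create block above, minikube inspects the existing docker bridge networks, skips the subnets already in use (192.168.49.0/24, .58, .67), settles on the free 192.168.76.0/24, and creates a bridge network for the profile. A rough local sketch of that idea (not minikube's actual algorithm; the candidate list simply mirrors the subnets probed in the log, and "demo-net" is a hypothetical network name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// usedSubnets collects the IPAM subnets of every existing docker network.
func usedSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	used := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect",
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Fields(string(out)) {
			used[s] = true
		}
	}
	return used, nil
}

func main() {
	used, err := usedSubnets()
	if err != nil {
		panic(err)
	}
	for third := 49; third < 255; third += 9 { // …49, 58, 67, 76… as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if used[subnet] {
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		// Flags copied from the `docker network create` run in the log above.
		fmt.Println("docker network create --driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 demo-net")
		return
	}
}
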
	I1129 10:21:49.758834  501976 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-949993" container
	I1129 10:21:49.758910  501976 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:21:49.779807  501976 cli_runner.go:164] Run: docker volume create no-preload-949993 --label name.minikube.sigs.k8s.io=no-preload-949993 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:21:49.803912  501976 oci.go:103] Successfully created a docker volume no-preload-949993
	I1129 10:21:49.803990  501976 cli_runner.go:164] Run: docker run --rm --name no-preload-949993-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-949993 --entrypoint /usr/bin/test -v no-preload-949993:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:21:50.456928  501976 oci.go:107] Successfully prepared a docker volume no-preload-949993
	I1129 10:21:50.456983  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1129 10:21:50.457114  501976 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 10:21:50.457209  501976 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 10:21:50.539187  501976 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-949993 --name no-preload-949993 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-949993 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-949993 --network no-preload-949993 --ip 192.168.76.2 --volume no-preload-949993:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 10:21:50.914387  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Running}}
	I1129 10:21:50.962661  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:50.986141  501976 cli_runner.go:164] Run: docker exec no-preload-949993 stat /var/lib/dpkg/alternatives/iptables
	I1129 10:21:51.056600  501976 oci.go:144] the created container "no-preload-949993" has a running status.
	I1129 10:21:51.056625  501976 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa...
	I1129 10:21:52.186661  501976 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 10:21:52.207615  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:52.227840  501976 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 10:21:52.227859  501976 kic_runner.go:114] Args: [docker exec --privileged no-preload-949993 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 10:21:52.288495  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:21:52.315019  501976 machine.go:94] provisionDockerMachine start ...
	I1129 10:21:52.315102  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:52.355912  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:52.356380  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:52.356397  501976 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:21:52.357071  501976 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46092->127.0.0.1:33441: read: connection reset by peer
	I1129 10:21:53.496604  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:21:53.515130  500704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:21:53.599764  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:21:53.599786  500704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:21:53.606118  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:21:53.669583  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:21:53.669651  500704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:21:53.750386  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:21:53.750414  500704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:21:53.829995  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:21:53.830021  500704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:21:53.846679  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:21:53.846708  500704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:21:53.861141  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:21:53.861169  500704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:21:53.876970  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:21:53.876998  500704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:21:53.891849  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:21:53.891877  500704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:21:53.905837  500704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:21:53.905865  500704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:21:53.924457  500704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:21:55.538714  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:21:55.538787  501976 ubuntu.go:182] provisioning hostname "no-preload-949993"
	I1129 10:21:55.538898  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:55.567685  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:55.568004  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:55.568016  501976 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-949993 && echo "no-preload-949993" | sudo tee /etc/hostname
	I1129 10:21:55.776979  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:21:55.777150  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:55.810413  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:55.810777  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:55.810801  501976 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-949993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-949993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-949993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:21:56.019163  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:21:56.019191  501976 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:21:56.019221  501976 ubuntu.go:190] setting up certificates
	I1129 10:21:56.019231  501976 provision.go:84] configureAuth start
	I1129 10:21:56.019301  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:56.051767  501976 provision.go:143] copyHostCerts
	I1129 10:21:56.051863  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:21:56.051880  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:21:56.051962  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:21:56.052082  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:21:56.052093  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:21:56.052125  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:21:56.052198  501976 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:21:56.052209  501976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:21:56.052236  501976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:21:56.052305  501976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.no-preload-949993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-949993]
	I1129 10:21:56.570023  501976 provision.go:177] copyRemoteCerts
	I1129 10:21:56.570117  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:21:56.570170  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:56.611789  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:56.727463  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:21:56.761273  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:21:56.804014  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:21:56.836897  501976 provision.go:87] duration metric: took 817.652726ms to configureAuth
	I1129 10:21:56.836975  501976 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:21:56.837223  501976 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:21:56.837373  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:56.874226  501976 main.go:143] libmachine: Using SSH client type: native
	I1129 10:21:56.874530  501976 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1129 10:21:56.874546  501976 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:21:57.286460  501976 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:21:57.286494  501976 machine.go:97] duration metric: took 4.971457554s to provisionDockerMachine
	I1129 10:21:57.286505  501976 client.go:176] duration metric: took 7.683609067s to LocalClient.Create
	I1129 10:21:57.286519  501976 start.go:167] duration metric: took 7.683654401s to libmachine.API.Create "no-preload-949993"
	I1129 10:21:57.286526  501976 start.go:293] postStartSetup for "no-preload-949993" (driver="docker")
	I1129 10:21:57.286551  501976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:21:57.286623  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:21:57.286678  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.308367  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.424909  501976 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:21:57.430750  501976 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:21:57.430779  501976 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:21:57.430799  501976 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:21:57.430859  501976 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:21:57.430953  501976 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:21:57.431069  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:21:57.444260  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:21:57.483785  501976 start.go:296] duration metric: took 197.243744ms for postStartSetup
	I1129 10:21:57.484232  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:57.509021  501976 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:21:57.509357  501976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:21:57.509407  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.540658  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.670473  501976 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:21:57.678696  501976 start.go:128] duration metric: took 8.07940633s to createHost
	I1129 10:21:57.678735  501976 start.go:83] releasing machines lock for "no-preload-949993", held for 8.07954097s
	I1129 10:21:57.678819  501976 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:21:57.707733  501976 ssh_runner.go:195] Run: cat /version.json
	I1129 10:21:57.707797  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.708033  501976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:21:57.708096  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:21:57.750881  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.759861  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:21:57.870070  501976 ssh_runner.go:195] Run: systemctl --version
	I1129 10:21:57.997357  501976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:21:58.089365  501976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:21:58.094857  501976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:21:58.094971  501976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:21:58.144844  501976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 10:21:58.144921  501976 start.go:496] detecting cgroup driver to use...
	I1129 10:21:58.144962  501976 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:21:58.145064  501976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:21:58.177507  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:21:58.194733  501976 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:21:58.194830  501976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:21:58.227319  501976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:21:58.248640  501976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:21:58.467119  501976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:21:58.684578  501976 docker.go:234] disabling docker service ...
	I1129 10:21:58.684702  501976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:21:58.728358  501976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:21:58.753439  501976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:21:58.975612  501976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:21:59.194680  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:21:59.224249  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:21:59.252517  501976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:21:59.252634  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.275696  501976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:21:59.275816  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.290540  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.308804  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.325262  501976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:21:59.339436  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.355482  501976 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.380780  501976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:21:59.392850  501976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:21:59.405619  501976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:21:59.413931  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:21:59.633734  501976 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:21:59.871391  501976 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:21:59.871541  501976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:21:59.882902  501976 start.go:564] Will wait 60s for crictl version
	I1129 10:21:59.883018  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:21:59.890737  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:21:59.939200  501976 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
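The runtime check above can be reproduced by hand against the same endpoint written to /etc/crictl.yaml earlier in the log; a small sketch:

	# sketch: confirm CRI-O is up and answering on the configured socket
	sudo systemctl is-active crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | grep -i runtime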
	I1129 10:21:59.939357  501976 ssh_runner.go:195] Run: crio --version
	I1129 10:22:00.004861  501976 ssh_runner.go:195] Run: crio --version
	I1129 10:22:00.179566  501976 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:21:59.110602  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.613922853s)
	I1129 10:21:59.110959  500704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.595806777s)
	I1129 10:21:59.110991  500704 node_ready.go:35] waiting up to 6m0s for node "embed-certs-708011" to be "Ready" ...
	I1129 10:21:59.422567  500704 node_ready.go:49] node "embed-certs-708011" is "Ready"
	I1129 10:21:59.422651  500704 node_ready.go:38] duration metric: took 311.639365ms for node "embed-certs-708011" to be "Ready" ...
	I1129 10:21:59.422682  500704 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:21:59.422750  500704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:22:01.482517  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.876368423s)
	I1129 10:22:01.673929  500704 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.25111642s)
	I1129 10:22:01.673960  500704 api_server.go:72] duration metric: took 8.730573624s to wait for apiserver process to appear ...
	I1129 10:22:01.673966  500704 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:22:01.673984  500704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:22:01.674825  500704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.750328082s)
	I1129 10:22:01.677915  500704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-708011 addons enable metrics-server
	
	I1129 10:22:01.680759  500704 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:22:01.683785  500704 addons.go:530] duration metric: took 8.74005559s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:22:01.686877  500704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:22:01.686904  500704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:22:02.174180  500704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 10:22:02.195628  500704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 10:22:02.198419  500704 api_server.go:141] control plane version: v1.34.1
	I1129 10:22:02.198443  500704 api_server.go:131] duration metric: took 524.470969ms to wait for apiserver health ...
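The healthz polling above can be reproduced from the host with curl; a sketch (the -k flag skips TLS verification and is for illustration only; ?verbose returns the per-check breakdown seen in the 500 response):

	# sketch: poll the apiserver health endpoint until it reports "ok"
	until curl -ks https://192.168.85.2:8443/healthz | grep -qx ok; do
	  sleep 1
	done
	curl -ks "https://192.168.85.2:8443/healthz?verbose"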
	I1129 10:22:02.198452  500704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:22:02.209282  500704 system_pods.go:59] 8 kube-system pods found
	I1129 10:22:02.209321  500704 system_pods.go:61] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:22:02.209331  500704 system_pods.go:61] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:22:02.209336  500704 system_pods.go:61] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:22:02.209344  500704 system_pods.go:61] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:22:02.209352  500704 system_pods.go:61] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:22:02.209356  500704 system_pods.go:61] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:22:02.209362  500704 system_pods.go:61] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:22:02.209366  500704 system_pods.go:61] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Running
	I1129 10:22:02.209373  500704 system_pods.go:74] duration metric: took 10.91527ms to wait for pod list to return data ...
	I1129 10:22:02.209382  500704 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:22:02.215389  500704 default_sa.go:45] found service account: "default"
	I1129 10:22:02.215472  500704 default_sa.go:55] duration metric: took 6.083608ms for default service account to be created ...
	I1129 10:22:02.215498  500704 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:22:02.226434  500704 system_pods.go:86] 8 kube-system pods found
	I1129 10:22:02.226522  500704 system_pods.go:89] "coredns-66bc5c9577-5frc4" [708179d5-3a6c-457c-8c3a-32e60b0ec8d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:22:02.226548  500704 system_pods.go:89] "etcd-embed-certs-708011" [a4949097-376a-4ead-b834-1e921dd2e7d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:22:02.226589  500704 system_pods.go:89] "kindnet-wfvvz" [ff138410-a3cc-4e8e-a66c-dcbcf88b738c] Running
	I1129 10:22:02.226616  500704 system_pods.go:89] "kube-apiserver-embed-certs-708011" [9b34d3cf-7e73-48b1-89dd-bbed604f1a58] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:22:02.226638  500704 system_pods.go:89] "kube-controller-manager-embed-certs-708011" [e8602920-337f-4074-99d7-e71ea7e754c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:22:02.226675  500704 system_pods.go:89] "kube-proxy-phs6g" [84396f86-dd6d-48d7-9b5b-49ebf273f71b] Running
	I1129 10:22:02.226701  500704 system_pods.go:89] "kube-scheduler-embed-certs-708011" [477c7647-34d5-4144-a2e1-5c639fdadc84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:22:02.226732  500704 system_pods.go:89] "storage-provisioner" [ca33c340-0e42-4780-bf32-d1e48f79705f] Running
	I1129 10:22:02.226768  500704 system_pods.go:126] duration metric: took 11.250954ms to wait for k8s-apps to be running ...
	I1129 10:22:02.226795  500704 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:22:02.226923  500704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:02.264284  500704 system_svc.go:56] duration metric: took 37.480833ms WaitForService to wait for kubelet
	I1129 10:22:02.264365  500704 kubeadm.go:587] duration metric: took 9.320977025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:22:02.264399  500704 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:22:02.284816  500704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:22:02.284899  500704 node_conditions.go:123] node cpu capacity is 2
	I1129 10:22:02.284926  500704 node_conditions.go:105] duration metric: took 20.490406ms to run NodePressure ...
	I1129 10:22:02.284965  500704 start.go:242] waiting for startup goroutines ...
	I1129 10:22:02.284988  500704 start.go:247] waiting for cluster config update ...
	I1129 10:22:02.285012  500704 start.go:256] writing updated cluster config ...
	I1129 10:22:02.285307  500704 ssh_runner.go:195] Run: rm -f paused
	I1129 10:22:02.291454  500704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:22:02.304732  500704 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
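The extra pod-readiness wait that starts here has a one-line kubectl equivalent; a sketch, assuming the kubeconfig context minikube creates for this profile is named embed-certs-708011:

	# sketch: wait up to 4 minutes for the coredns pod labelled k8s-app=kube-dns
	kubectl --context embed-certs-708011 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m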
	I1129 10:22:00.182800  501976 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:22:00.226663  501976 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:22:00.232923  501976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:22:00.251332  501976 kubeadm.go:884] updating cluster {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:22:00.251462  501976 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:22:00.251522  501976 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:22:00.326610  501976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 10:22:00.326643  501976 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 10:22:00.326719  501976 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.326989  501976 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.327088  501976 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.327186  501976 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.327675  501976 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.327828  501976 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.328750  501976 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:00.328950  501976 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.329447  501976 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.330237  501976 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.330503  501976 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.330666  501976 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.330819  501976 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.331774  501976 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.331964  501976 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.332429  501976 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:00.657513  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1129 10:22:00.662898  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.669935  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.676141  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.681518  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.685133  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.694651  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.832698  501976 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1129 10:22:00.832790  501976 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 10:22:00.832861  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.928107  501976 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1129 10:22:00.928214  501976 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.928301  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.969603  501976 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1129 10:22:00.969704  501976 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.969785  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.969910  501976 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1129 10:22:00.969985  501976 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:00.970071  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.970016  501976 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1129 10:22:00.970229  501976 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.970299  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984316  501976 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1129 10:22:00.984359  501976 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:00.984418  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984495  501976 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1129 10:22:00.984515  501976 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:00.984543  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:00.984624  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:00.984684  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:00.996003  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:00.996075  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:00.996138  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.162782  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.162864  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.162929  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:01.162986  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:01.167581  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.167661  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:01.167712  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:01.399625  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 10:22:01.399707  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.399762  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.402185  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 10:22:01.402315  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 10:22:01.402371  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 10:22:01.402441  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 10:22:01.587929  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 10:22:01.588009  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1129 10:22:01.588084  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:01.588159  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 10:22:01.638360  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 10:22:01.638473  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:01.638536  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 10:22:01.638600  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:01.638698  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 10:22:01.638754  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 10:22:01.638815  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1129 10:22:01.638862  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 10:22:01.716607  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 10:22:01.716707  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:01.716783  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 10:22:01.716834  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:01.716885  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 10:22:01.716901  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1129 10:22:01.716958  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 10:22:01.716973  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1129 10:22:01.717011  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 10:22:01.717023  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1129 10:22:01.717063  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 10:22:01.717076  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1129 10:22:01.717125  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 10:22:01.717140  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	W1129 10:22:01.730483  501976 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1129 10:22:01.730684  501976 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:01.805232  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 10:22:01.805284  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1129 10:22:01.805365  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 10:22:01.805389  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1129 10:22:01.902685  501976 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:01.902806  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1129 10:22:02.068833  501976 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1129 10:22:02.068883  501976 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.068951  501976 ssh_runner.go:195] Run: which crictl
	I1129 10:22:02.496317  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.496425  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1129 10:22:02.725614  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:02.745151  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:02.745230  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 10:22:02.954329  501976 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1129 10:22:04.337825  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:06.818418  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:05.036367  501976 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.082000864s)
	I1129 10:22:05.036413  501976 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 10:22:05.036495  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:05.036603  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.291355111s)
	I1129 10:22:05.036629  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 10:22:05.036680  501976 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:05.036759  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1129 10:22:07.608967  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.572168293s)
	I1129 10:22:07.609044  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 10:22:07.609077  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:07.609158  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 10:22:07.609271  501976 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.572765822s)
	I1129 10:22:07.609305  501976 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 10:22:07.609355  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1129 10:22:09.211271  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.602072971s)
	I1129 10:22:09.211295  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 10:22:09.211311  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 10:22:09.211359  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1129 10:22:09.315635  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:11.370342  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:11.084382  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.873000883s)
	I1129 10:22:11.084412  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 10:22:11.084431  501976 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:11.084491  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 10:22:13.305263  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.220747528s)
	I1129 10:22:13.305291  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 10:22:13.305308  501976 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 10:22:13.305356  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1129 10:22:13.810151  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:15.811060  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:17.817233  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:18.254033  501976 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.948651878s)
	I1129 10:22:18.254066  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 10:22:18.254121  501976 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:18.254198  501976 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1129 10:22:18.921653  501976 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 10:22:18.921692  501976 cache_images.go:125] Successfully loaded all cached images
	I1129 10:22:18.921698  501976 cache_images.go:94] duration metric: took 18.595042475s to LoadCachedImages
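Each "Loading image" step above amounts to a podman load into the container storage that CRI-O reads from; a sketch for one of the tarballs already copied to the node:

	# sketch: load a cached image tarball and confirm the runtime can see it
	sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	sudo crictl images | grep etcd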
	I1129 10:22:18.921711  501976 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:22:18.921799  501976 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-949993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
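The kubelet unit fragment above ends up as the systemd drop-in written a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf); a sketch for inspecting the effective unit on the node once it is in place:

	# sketch: show the merged kubelet unit and its overridden ExecStart
	sudo systemctl daemon-reload
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart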
	I1129 10:22:18.921907  501976 ssh_runner.go:195] Run: crio config
	I1129 10:22:19.007982  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:22:19.008059  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:22:19.008092  501976 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:22:19.008152  501976 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-949993 NodeName:no-preload-949993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:22:19.008336  501976 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-949993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
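The kubeadm config above is written to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp further down); a sketch of how it could be exercised by hand, assuming the v1.34.1 binaries transferred later in the log are in place:

	# sketch: dry-run kubeadm against the generated config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run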
	
	I1129 10:22:19.008467  501976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:22:19.017281  501976 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 10:22:19.017371  501976 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 10:22:19.026274  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1129 10:22:19.026408  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	I1129 10:22:19.026461  501976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:22:19.026551  501976 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	I1129 10:22:19.026581  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 10:22:19.026616  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 10:22:19.032033  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 10:22:19.032067  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1129 10:22:19.051436  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 10:22:19.051474  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1129 10:22:19.051613  501976 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 10:22:19.072940  501976 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 10:22:19.073020  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
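The three binaries above come straight from dl.k8s.io, each with a .sha256 checksum file alongside it; a sketch of fetching and verifying one of them by hand with the same URLs:

	# sketch: download kubectl for linux/arm64 and verify it against its published checksum
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check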
	I1129 10:22:19.962533  501976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:22:19.970344  501976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:22:19.985716  501976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:22:20.001389  501976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1129 10:22:20.023047  501976 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:22:20.027652  501976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:22:20.041950  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:22:20.168522  501976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:22:20.185858  501976 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993 for IP: 192.168.76.2
	I1129 10:22:20.185877  501976 certs.go:195] generating shared ca certs ...
	I1129 10:22:20.185894  501976 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.186040  501976 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:22:20.186125  501976 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:22:20.186134  501976 certs.go:257] generating profile certs ...
	I1129 10:22:20.186198  501976 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key
	I1129 10:22:20.186214  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt with IP's: []
	I1129 10:22:20.463083  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt ...
	I1129 10:22:20.463123  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: {Name:mk4b581f7eb26bf54bbcc9fff9bb33d1486cf7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.463362  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key ...
	I1129 10:22:20.463378  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key: {Name:mk714e8a10132529e0b91fcdae06d626fc7556e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.463485  501976 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f
	I1129 10:22:20.463506  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 10:22:20.586570  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f ...
	I1129 10:22:20.586603  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f: {Name:mkf7d78d1b942aedb1b07bbb205304740db88aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.586797  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f ...
	I1129 10:22:20.586811  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f: {Name:mkf0d6e044f009e6ce32172ee6072ce3909aa312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.586897  501976 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt.e0168a5f -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt
	I1129 10:22:20.586980  501976 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key
	I1129 10:22:20.587045  501976 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key
	I1129 10:22:20.587063  501976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt with IP's: []
	I1129 10:22:20.885913  501976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt ...
	I1129 10:22:20.885946  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt: {Name:mka3ee1b3704a5d22582f4d70df1101ba6dea36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.886158  501976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key ...
	I1129 10:22:20.886181  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key: {Name:mk5fdf5d105235ce3b0a3b4223d2de2ec844c566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:20.886386  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:22:20.886435  501976 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:22:20.886449  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:22:20.886478  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:22:20.886513  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:22:20.886543  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:22:20.886591  501976 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:22:20.887144  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:22:20.905564  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:22:20.929584  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:22:20.950528  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:22:20.970909  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:22:20.995523  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:22:21.015657  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:22:21.033943  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:22:21.052584  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:22:21.071956  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:22:21.089621  501976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:22:21.107728  501976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:22:21.121184  501976 ssh_runner.go:195] Run: openssl version
	I1129 10:22:21.130633  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:22:21.141405  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.145477  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.145542  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:22:21.189324  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:22:21.198387  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:22:21.206520  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.210480  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.210574  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:22:21.251358  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:22:21.259818  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:22:21.268054  501976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.271899  501976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.271966  501976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:22:21.312897  501976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:22:21.321443  501976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:22:21.325218  501976 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 10:22:21.325320  501976 kubeadm.go:401] StartCluster: {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:22:21.325406  501976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:22:21.325484  501976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:22:21.356374  501976 cri.go:89] found id: ""
	I1129 10:22:21.356554  501976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:22:21.364645  501976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 10:22:21.372614  501976 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 10:22:21.372714  501976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 10:22:21.381024  501976 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 10:22:21.381086  501976 kubeadm.go:158] found existing configuration files:
	
	I1129 10:22:21.381146  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 10:22:21.389030  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 10:22:21.389123  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 10:22:21.396579  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 10:22:21.404825  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 10:22:21.404893  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 10:22:21.412406  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 10:22:21.420415  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 10:22:21.420538  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 10:22:21.428777  501976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 10:22:21.436764  501976 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 10:22:21.436865  501976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 10:22:21.444387  501976 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 10:22:21.483202  501976 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 10:22:21.483447  501976 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 10:22:21.509939  501976 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 10:22:21.510037  501976 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 10:22:21.510142  501976 kubeadm.go:319] OS: Linux
	I1129 10:22:21.510212  501976 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 10:22:21.510275  501976 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 10:22:21.510337  501976 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 10:22:21.510405  501976 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 10:22:21.510467  501976 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 10:22:21.510534  501976 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 10:22:21.510594  501976 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 10:22:21.510648  501976 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 10:22:21.510709  501976 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 10:22:21.590613  501976 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 10:22:21.590744  501976 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 10:22:21.590851  501976 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 10:22:21.614845  501976 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1129 10:22:19.830446  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:22.319382  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:21.621800  501976 out.go:252]   - Generating certificates and keys ...
	I1129 10:22:21.621918  501976 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 10:22:21.621998  501976 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 10:22:21.954032  501976 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 10:22:22.810720  501976 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 10:22:23.091065  501976 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 10:22:23.261826  501976 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 10:22:23.316753  501976 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 10:22:23.317046  501976 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-949993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:22:23.723567  501976 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 10:22:23.723986  501976 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-949993] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:22:23.857951  501976 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 10:22:24.238097  501976 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 10:22:24.504202  501976 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:22:24.504501  501976 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:22:24.811358  501976 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:22:25.289001  501976 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:22:26.403623  501976 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:22:27.147284  501976 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:22:27.762029  501976 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:22:27.762626  501976 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:22:27.766720  501976 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1129 10:22:24.814129  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:27.341823  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:27.770195  501976 out.go:252]   - Booting up control plane ...
	I1129 10:22:27.770304  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:22:27.770388  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:22:27.771511  501976 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:22:27.789358  501976 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:22:27.789468  501976 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:22:27.797073  501976 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:22:27.797405  501976 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:22:27.797451  501976 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:22:27.940798  501976 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:22:27.940920  501976 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1129 10:22:29.810759  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:32.317582  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:29.941179  501976 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000710001s
	I1129 10:22:29.945156  501976 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:22:29.945547  501976 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 10:22:29.945892  501976 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:22:29.946627  501976 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:22:33.446553  501976 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.49970414s
	I1129 10:22:35.535831  501976 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.588752495s
	I1129 10:22:36.448320  501976 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50192286s
	I1129 10:22:36.472744  501976 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:22:36.489204  501976 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:22:36.505277  501976 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:22:36.505483  501976 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-949993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:22:36.517801  501976 kubeadm.go:319] [bootstrap-token] Using token: l5g4yc.1v5lim4xatob3w56
	I1129 10:22:36.520777  501976 out.go:252]   - Configuring RBAC rules ...
	I1129 10:22:36.520905  501976 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:22:36.528179  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:22:36.539590  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:22:36.547488  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:22:36.551621  501976 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:22:36.558681  501976 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:22:36.857162  501976 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:22:37.336941  501976 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:22:37.855570  501976 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:22:37.856863  501976 kubeadm.go:319] 
	I1129 10:22:37.856941  501976 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:22:37.856952  501976 kubeadm.go:319] 
	I1129 10:22:37.857030  501976 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:22:37.857037  501976 kubeadm.go:319] 
	I1129 10:22:37.857062  501976 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:22:37.857130  501976 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:22:37.857184  501976 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:22:37.857191  501976 kubeadm.go:319] 
	I1129 10:22:37.857245  501976 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:22:37.857253  501976 kubeadm.go:319] 
	I1129 10:22:37.857309  501976 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:22:37.857315  501976 kubeadm.go:319] 
	I1129 10:22:37.857366  501976 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:22:37.857449  501976 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:22:37.857522  501976 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:22:37.857529  501976 kubeadm.go:319] 
	I1129 10:22:37.857620  501976 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:22:37.857699  501976 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:22:37.857707  501976 kubeadm.go:319] 
	I1129 10:22:37.857791  501976 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token l5g4yc.1v5lim4xatob3w56 \
	I1129 10:22:37.857896  501976 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:22:37.857921  501976 kubeadm.go:319] 	--control-plane 
	I1129 10:22:37.857928  501976 kubeadm.go:319] 
	I1129 10:22:37.858014  501976 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:22:37.858022  501976 kubeadm.go:319] 
	I1129 10:22:37.858135  501976 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token l5g4yc.1v5lim4xatob3w56 \
	I1129 10:22:37.858250  501976 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:22:37.863120  501976 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:22:37.863353  501976 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:22:37.863476  501976 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:22:37.863501  501976 cni.go:84] Creating CNI manager for ""
	I1129 10:22:37.863510  501976 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:22:37.866561  501976 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1129 10:22:34.812769  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	W1129 10:22:37.319023  500704 pod_ready.go:104] pod "coredns-66bc5c9577-5frc4" is not "Ready", error: <nil>
	I1129 10:22:37.869450  501976 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:22:37.876313  501976 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:22:37.876334  501976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:22:37.897975  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:22:38.210160  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:38.210283  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-949993 minikube.k8s.io/updated_at=2025_11_29T10_22_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-949993 minikube.k8s.io/primary=true
	I1129 10:22:38.210044  501976 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:22:38.556793  501976 ops.go:34] apiserver oom_adj: -16
	I1129 10:22:38.556923  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:39.057048  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:38.810477  500704 pod_ready.go:94] pod "coredns-66bc5c9577-5frc4" is "Ready"
	I1129 10:22:38.810506  500704 pod_ready.go:86] duration metric: took 36.505705488s for pod "coredns-66bc5c9577-5frc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.813348  500704 pod_ready.go:83] waiting for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.818383  500704 pod_ready.go:94] pod "etcd-embed-certs-708011" is "Ready"
	I1129 10:22:38.818457  500704 pod_ready.go:86] duration metric: took 5.083334ms for pod "etcd-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.820731  500704 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.825624  500704 pod_ready.go:94] pod "kube-apiserver-embed-certs-708011" is "Ready"
	I1129 10:22:38.825658  500704 pod_ready.go:86] duration metric: took 4.846769ms for pod "kube-apiserver-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:38.828449  500704 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.008800  500704 pod_ready.go:94] pod "kube-controller-manager-embed-certs-708011" is "Ready"
	I1129 10:22:39.008831  500704 pod_ready.go:86] duration metric: took 180.356573ms for pod "kube-controller-manager-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.207960  500704 pod_ready.go:83] waiting for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.608135  500704 pod_ready.go:94] pod "kube-proxy-phs6g" is "Ready"
	I1129 10:22:39.608166  500704 pod_ready.go:86] duration metric: took 400.176887ms for pod "kube-proxy-phs6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:39.808750  500704 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:40.208519  500704 pod_ready.go:94] pod "kube-scheduler-embed-certs-708011" is "Ready"
	I1129 10:22:40.208560  500704 pod_ready.go:86] duration metric: took 399.740049ms for pod "kube-scheduler-embed-certs-708011" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:22:40.208574  500704 pod_ready.go:40] duration metric: took 37.91704367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:22:40.266261  500704 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:22:40.271290  500704 out.go:179] * Done! kubectl is now configured to use "embed-certs-708011" cluster and "default" namespace by default
	I1129 10:22:39.557113  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:40.057492  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:40.557707  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:41.056992  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:41.557331  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:42.057866  501976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:22:42.253622  501976 kubeadm.go:1114] duration metric: took 4.043540405s to wait for elevateKubeSystemPrivileges
	I1129 10:22:42.253650  501976 kubeadm.go:403] duration metric: took 20.92833643s to StartCluster
	I1129 10:22:42.253668  501976 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:42.253730  501976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:22:42.255371  501976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:22:42.255635  501976 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:22:42.255862  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:22:42.256167  501976 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:22:42.256214  501976 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:22:42.256277  501976 addons.go:70] Setting storage-provisioner=true in profile "no-preload-949993"
	I1129 10:22:42.256292  501976 addons.go:239] Setting addon storage-provisioner=true in "no-preload-949993"
	I1129 10:22:42.256373  501976 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:22:42.256953  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.258295  501976 addons.go:70] Setting default-storageclass=true in profile "no-preload-949993"
	I1129 10:22:42.258332  501976 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-949993"
	I1129 10:22:42.258662  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.259364  501976 out.go:179] * Verifying Kubernetes components...
	I1129 10:22:42.262237  501976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:22:42.296552  501976 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:22:42.301361  501976 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:22:42.301389  501976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:22:42.301478  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:22:42.303972  501976 addons.go:239] Setting addon default-storageclass=true in "no-preload-949993"
	I1129 10:22:42.304015  501976 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:22:42.304452  501976 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:22:42.343585  501976 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:22:42.343624  501976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:22:42.343711  501976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:22:42.354263  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:22:42.387880  501976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:22:42.648189  501976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:22:42.663978  501976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:22:42.693219  501976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:22:42.724420  501976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:22:43.375159  501976 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 10:22:43.376313  501976 node_ready.go:35] waiting up to 6m0s for node "no-preload-949993" to be "Ready" ...
	I1129 10:22:43.427029  501976 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:22:43.429985  501976 addons.go:530] duration metric: took 1.173754136s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 10:22:43.880661  501976 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-949993" context rescaled to 1 replicas
	W1129 10:22:45.381091  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:47.381541  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:49.381841  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	W1129 10:22:51.882950  501976 node_ready.go:57] node "no-preload-949993" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.43810367Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444124967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444221862Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.444272054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448485232Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448522795Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.448549881Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.453767041Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.45392747Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.454009891Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.459389885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:22:40 embed-certs-708011 crio[661]: time="2025-11-29T10:22:40.459579114Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.99288517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a6a1c89b-e76b-40d3-94f5-529ac8ea3072 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.993910733Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f80849d6-c017-4a64-8185-336d6189cca3 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.995233817Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=3dd0066a-1670-4d19-8c53-0b69c57143ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:22:47 embed-certs-708011 crio[661]: time="2025-11-29T10:22:47.995324362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.008719621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.009357101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.039331263Z" level=info msg="Created container 05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=3dd0066a-1670-4d19-8c53-0b69c57143ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.040125693Z" level=info msg="Starting container: 05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025" id=bddc51c4-d854-4c10-97fc-9b88d525a7c3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.047289691Z" level=info msg="Started container" PID=1746 containerID=05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper id=bddc51c4-d854-4c10-97fc-9b88d525a7c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0bf9fd51cf12f24ba39875c9f7bc3724d93bb93b8f7b1162d93ee2d387d3022f
	Nov 29 10:22:48 embed-certs-708011 conmon[1744]: conmon 05c5efef0f56ea5f8f85 <ninfo>: container 1746 exited with status 1
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.580943573Z" level=info msg="Removing container: 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.591894069Z" level=info msg="Error loading conmon cgroup of container 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c: cgroup deleted" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:22:48 embed-certs-708011 crio[661]: time="2025-11-29T10:22:48.595152142Z" level=info msg="Removed container 275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz/dashboard-metrics-scraper" id=9e5fbe66-99e5-4ae2-9087-d9e554d18a3c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	05c5efef0f56e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   0bf9fd51cf12f       dashboard-metrics-scraper-6ffb444bf9-2q2nz   kubernetes-dashboard
	52238fd1860d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   ac59ac41f805f       storage-provisioner                          kube-system
	4c2d7f74c7191       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   9d50b825c1851       kubernetes-dashboard-855c9754f9-7sxs9        kubernetes-dashboard
	015314d00d4b2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   12607321302e1       coredns-66bc5c9577-5frc4                     kube-system
	26abb8ba87208       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   69fe5fcb37fe6       busybox                                      default
	826e09f43d9ea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   e1648b5370677       kube-proxy-phs6g                             kube-system
	b95113cddf489       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   f4f71385e2664       kindnet-wfvvz                                kube-system
	e1fb3814acf19       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   ac59ac41f805f       storage-provisioner                          kube-system
	727bfd303dcd3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   68cb9ddee48af       kube-controller-manager-embed-certs-708011   kube-system
	adedb317fa6de       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4f3ea82e0d3d8       etcd-embed-certs-708011                      kube-system
	2aaa1ea4482b2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7c4733c7e63c7       kube-apiserver-embed-certs-708011            kube-system
	2ef08f65fca11       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0e5fd67fe1657       kube-scheduler-embed-certs-708011            kube-system
	
	
	==> coredns [015314d00d4b2a6f4bfa3c12ab4ce66f4a0c69af043fdc307314c31e97739e05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51282 - 49691 "HINFO IN 472883754259937002.1044523959517425734. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00447586s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-708011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-708011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-708011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-708011
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:22:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:20:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:22:29 +0000   Sat, 29 Nov 2025 10:21:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-708011
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1f99ece5-e15d-4bbe-acc3-9db5d863dc89
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-5frc4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-708011                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-wfvvz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-708011             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-708011    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-phs6g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-708011             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2q2nz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7sxs9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m25s                  node-controller  Node embed-certs-708011 event: Registered Node embed-certs-708011 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-708011 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 67s)      kubelet          Node embed-certs-708011 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 67s)      kubelet          Node embed-certs-708011 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 67s)      kubelet          Node embed-certs-708011 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node embed-certs-708011 event: Registered Node embed-certs-708011 in Controller
	
	
	==> dmesg <==
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [adedb317fa6ded4dccd2df4734b6d20b491a0f67cf474ed309d88012e548e457] <==
	{"level":"warn","ts":"2025-11-29T10:21:55.998214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.024280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.067368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.102426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.146302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.183553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.220579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.289413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.327481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.403245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.446304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.475183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.524015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.562609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.610157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.667948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.700769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.766883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.832686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.894616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:56.989492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.021665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.055074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.086997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:21:57.199550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33070","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:22:58 up  3:05,  0 user,  load average: 4.39, 3.32, 2.60
	Linux embed-certs-708011 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b95113cddf48914a845a48ab3e34e7b56e9e981136414866952a96b8bd38b29c] <==
	I1129 10:22:00.076514       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:22:00.136609       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:22:00.136796       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:22:00.136812       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:22:00.136825       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:22:00.428636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:22:00.428666       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:22:00.428677       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:22:00.429396       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:22:30.429295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:22:30.429414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:22:30.429732       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:22:30.429853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1129 10:22:31.729365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:22:31.729463       1 metrics.go:72] Registering metrics
	I1129 10:22:31.735660       1 controller.go:711] "Syncing nftables rules"
	I1129 10:22:40.429844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:22:40.429989       1 main.go:301] handling current node
	I1129 10:22:50.429085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:22:50.429119       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2aaa1ea4482b2b93ec385bd586d3a93575baed71d36abd5df969b842ac5f01a7] <==
	I1129 10:21:58.791408       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:21:58.791415       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:21:58.799585       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:21:58.804566       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:21:58.822376       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:21:58.822732       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:21:58.868101       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:21:58.891497       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:21:58.892025       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:21:58.904107       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:21:58.904686       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1129 10:21:58.944948       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:21:58.963479       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:21:58.964585       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:21:59.136843       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:21:59.509299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:22:00.763666       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:22:01.040673       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:22:01.200000       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:22:01.270542       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:22:01.602420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.104"}
	I1129 10:22:01.659773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.15.98"}
	I1129 10:22:03.747587       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:22:04.125716       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:22:04.303505       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [727bfd303dcd33dd3dc3af40c533b3bdbad08c92dddb8ac1ae94569d7ffb8cbe] <==
	I1129 10:22:03.661322       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:22:03.663701       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:03.663743       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:22:03.663754       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:22:03.670721       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:22:03.671324       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:22:03.681793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:03.681924       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 10:22:03.682025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 10:22:03.682100       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 10:22:03.684307       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 10:22:03.684384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:22:03.687933       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:22:03.687968       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:22:03.688168       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:22:03.688191       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:22:03.689423       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:22:03.690897       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:22:03.693178       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:22:03.701191       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 10:22:03.706457       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 10:22:03.716827       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:22:03.718069       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:22:04.338047       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1129 10:22:04.338162       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [826e09f43d9eac658b0cdc43a8652be4cf6343ebad98975ea0ab65ac30ac2604] <==
	I1129 10:22:01.788537       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:22:01.966801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:22:02.095754       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:22:02.095858       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:22:02.095954       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:22:02.139561       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:22:02.139622       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:22:02.151494       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:22:02.151943       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:22:02.154441       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:02.156873       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:22:02.156892       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:22:02.157246       1 config.go:200] "Starting service config controller"
	I1129 10:22:02.157254       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:22:02.157551       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:22:02.157559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:22:02.158026       1 config.go:309] "Starting node config controller"
	I1129 10:22:02.158047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:22:02.158056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:22:02.264608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 10:22:02.264756       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:22:02.331393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ef08f65fca11393f21d49612e025ab68e723121e34951983755e6c3f8c5032d] <==
	I1129 10:21:55.745920       1 serving.go:386] Generated self-signed cert in-memory
	I1129 10:22:01.958231       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:22:01.958265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:01.968106       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 10:22:01.968219       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 10:22:01.968319       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:22:01.972598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:22:01.968343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:22:01.968378       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:22:01.968392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:01.973075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:02.070045       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 10:22:02.077242       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:22:02.079035       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:22:05 embed-certs-708011 kubelet[791]: W1129 10:22:05.033968     791 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6641e3603d1622574c1c9c1d0020a4c23a8d24683db072132f5b960a45a691a/crio-9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f WatchSource:0}: Error finding container 9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f: Status 404 returned error can't find the container with id 9d50b825c1851713c9bfd41a23a261f1c51d7f6dda64faa3496b80de0b69578f
	Nov 29 10:22:08 embed-certs-708011 kubelet[791]: I1129 10:22:08.412619     791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 10:22:11 embed-certs-708011 kubelet[791]: I1129 10:22:11.460796     791 scope.go:117] "RemoveContainer" containerID="cea0412603481947e86ae366b5fb596021f5da5ba858ad77023a74639cfaad43"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: I1129 10:22:12.459596     791 scope.go:117] "RemoveContainer" containerID="cea0412603481947e86ae366b5fb596021f5da5ba858ad77023a74639cfaad43"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: I1129 10:22:12.459914     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:12 embed-certs-708011 kubelet[791]: E1129 10:22:12.460096     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:13 embed-certs-708011 kubelet[791]: I1129 10:22:13.465647     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:13 embed-certs-708011 kubelet[791]: E1129 10:22:13.465812     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:14 embed-certs-708011 kubelet[791]: I1129 10:22:14.690901     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:14 embed-certs-708011 kubelet[791]: E1129 10:22:14.691103     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:26 embed-certs-708011 kubelet[791]: I1129 10:22:26.991626     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.516085     791 scope.go:117] "RemoveContainer" containerID="235acaaad315daa8cf8fa076f40b6ce8419fb84a9910ba4ac20b642ea890ec55"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.516807     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: E1129 10:22:27.523755     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:27 embed-certs-708011 kubelet[791]: I1129 10:22:27.548271     791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sxs9" podStartSLOduration=11.130343447 podStartE2EDuration="23.548251163s" podCreationTimestamp="2025-11-29 10:22:04 +0000 UTC" firstStartedPulling="2025-11-29 10:22:05.039002204 +0000 UTC m=+13.501063780" lastFinishedPulling="2025-11-29 10:22:17.456909912 +0000 UTC m=+25.918971496" observedRunningTime="2025-11-29 10:22:18.522024523 +0000 UTC m=+26.984086107" watchObservedRunningTime="2025-11-29 10:22:27.548251163 +0000 UTC m=+36.010312747"
	Nov 29 10:22:31 embed-certs-708011 kubelet[791]: I1129 10:22:31.531609     791 scope.go:117] "RemoveContainer" containerID="e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06"
	Nov 29 10:22:34 embed-certs-708011 kubelet[791]: I1129 10:22:34.691663     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:34 embed-certs-708011 kubelet[791]: E1129 10:22:34.691844     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:47 embed-certs-708011 kubelet[791]: I1129 10:22:47.991971     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: I1129 10:22:48.579150     791 scope.go:117] "RemoveContainer" containerID="275b3e51ccec6e123fcf1c4b203883fc1a8867c3778cba53bb5a387925cd272c"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: I1129 10:22:48.579465     791 scope.go:117] "RemoveContainer" containerID="05c5efef0f56ea5f8f85c9776e6c30f22e870f8c669c726d80a929c51b387025"
	Nov 29 10:22:48 embed-certs-708011 kubelet[791]: E1129 10:22:48.579618     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2q2nz_kubernetes-dashboard(55e067ae-516b-4227-9193-deae2d76bee7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2q2nz" podUID="55e067ae-516b-4227-9193-deae2d76bee7"
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:22:52 embed-certs-708011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4c2d7f74c7191244238581233d9fc0da4fd49058d128ed3fc102d7709d1e9f02] <==
	2025/11/29 10:22:17 Using namespace: kubernetes-dashboard
	2025/11/29 10:22:17 Using in-cluster config to connect to apiserver
	2025/11/29 10:22:17 Using secret token for csrf signing
	2025/11/29 10:22:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:22:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:22:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:22:17 Generating JWE encryption key
	2025/11/29 10:22:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:22:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:22:18 Initializing JWE encryption key from synchronized object
	2025/11/29 10:22:18 Creating in-cluster Sidecar client
	2025/11/29 10:22:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:22:18 Serving insecurely on HTTP port: 9090
	2025/11/29 10:22:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:22:17 Starting overwatch
	
	
	==> storage-provisioner [52238fd1860d97914086f8137b7f7e753619496f7c2af91747d0ec80d787605a] <==
	I1129 10:22:31.628310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:22:31.659514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:22:31.659645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:22:31.662946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:35.117776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:39.377516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:42.976793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:46.031424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.053306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.058175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:49.058327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:22:49.058828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8f51354-af5a-4f25-a98a-e2bfddbbd579", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b became leader
	I1129 10:22:49.058877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b!
	W1129 10:22:49.061624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:49.070350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:49.159786       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-708011_0356dd93-f70a-489f-bd37-d80aa188eb6b!
	W1129 10:22:51.074721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:51.079643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:53.084820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:53.103114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:55.107196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:55.114271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:57.128904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:57.136938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e1fb3814acf1921cd5370d2ca7bf5102649547004e64a251d0e451b1e5b03c06] <==
	I1129 10:22:00.649386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:22:30.651496       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-708011 -n embed-certs-708011
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-708011 -n embed-certs-708011: exit status 2 (454.299994ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-708011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.69s)
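Note on the failure above: in the post-mortem logs the kubelet journal ends with systemd stopping kubelet.service at 10:22:52, which is consistent with the pause attempt, yet the follow-up status query still prints Running for the API server while exiting with status 2 (flagged "may be ok" by the harness). The same state can be inspected by hand by re-running the status query shown above; this is an illustrative command only, assuming the embed-certs-708011 profile from this run still exists:

	$ out/minikube-linux-arm64 status -p embed-certs-708011 --format='{{.APIServer}}'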

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (412.541074ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-949993 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-949993 describe deploy/metrics-server -n kube-system: exit status 1 (246.772679ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-949993 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
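The exit status 11 above is raised before the addon manifest is ever applied: per the stderr block, minikube's paused-state check shells out to `sudo runc list -f json` inside the node, and that command fails with "open /run/runc: no such file or directory", so no metrics-server deployment is created, which is why the kubectl describe call that follows reports NotFound. A minimal way to reproduce just the failing check by hand, assuming the no-preload-949993 profile from this run is still up (illustrative only, not part of the captured logs):

	$ out/minikube-linux-arm64 -p no-preload-949993 ssh -- sudo runc list -f json
	# in this run the equivalent command exited 1 with "open /run/runc: no such file or directory"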
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-949993
helpers_test.go:243: (dbg) docker inspect no-preload-949993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	        "Created": "2025-11-29T10:21:50.556040223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502434,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:21:50.636514411Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3-json.log",
	        "Name": "/no-preload-949993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-949993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-949993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	                "LowerDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-949993",
	                "Source": "/var/lib/docker/volumes/no-preload-949993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-949993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-949993",
	                "name.minikube.sigs.k8s.io": "no-preload-949993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4cb06400991c2dfced103b26345619c699d6ffe1ec7482b17be177b1adb73f42",
	            "SandboxKey": "/var/run/docker/netns/4cb06400991c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-949993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:28:74:09:48:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62b6fd5a8cb74510b8e0db3c4b4e346db103446743514dcfc437d8e74be8a4c3",
	                    "EndpointID": "52d3bf3938d28a11dfb7bb9ef58a82589d583aa3e5a569a23ce688e71592a933",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-949993",
	                        "01cb8829dafd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-949993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-949993 logs -n 25: (1.334999615s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-033056                                                                                                                                                                                                                        │ cert-options-033056          │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:17 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-685516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-685516 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:18 UTC │
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                                                                                               │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:23:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:23:03.261024  507966 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:23:03.261208  507966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:03.261246  507966 out.go:374] Setting ErrFile to fd 2...
	I1129 10:23:03.261268  507966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:03.261655  507966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:23:03.262222  507966 out.go:368] Setting JSON to false
	I1129 10:23:03.263374  507966 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11133,"bootTime":1764400651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:23:03.263472  507966 start.go:143] virtualization:  
	I1129 10:23:03.267325  507966 out.go:179] * [default-k8s-diff-port-194354] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:23:03.271543  507966 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:23:03.271644  507966 notify.go:221] Checking for updates...
	I1129 10:23:03.277877  507966 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:23:03.280877  507966 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:03.284318  507966 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:23:03.287354  507966 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:23:03.290275  507966 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:23:03.293694  507966 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:03.293781  507966 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:23:03.330486  507966 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:23:03.330618  507966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:03.390869  507966 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:23:03.380288968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:03.390983  507966 docker.go:319] overlay module found
	I1129 10:23:03.394272  507966 out.go:179] * Using the docker driver based on user configuration
	I1129 10:23:03.397105  507966 start.go:309] selected driver: docker
	I1129 10:23:03.397124  507966 start.go:927] validating driver "docker" against <nil>
	I1129 10:23:03.397137  507966 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:23:03.397859  507966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:03.451072  507966 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:23:03.441044998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:03.451225  507966 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:23:03.451442  507966 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:03.454426  507966 out.go:179] * Using Docker driver with root privileges
	I1129 10:23:03.457244  507966 cni.go:84] Creating CNI manager for ""
	I1129 10:23:03.457322  507966 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:03.457339  507966 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:23:03.457422  507966 start.go:353] cluster config:
	{Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:03.460525  507966 out.go:179] * Starting "default-k8s-diff-port-194354" primary control-plane node in "default-k8s-diff-port-194354" cluster
	I1129 10:23:03.463328  507966 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:23:03.466284  507966 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:23:03.469118  507966 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:03.469164  507966 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:23:03.469176  507966 cache.go:65] Caching tarball of preloaded images
	I1129 10:23:03.469203  507966 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:23:03.470146  507966 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:23:03.470172  507966 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:23:03.470275  507966 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/config.json ...
	I1129 10:23:03.470295  507966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/config.json: {Name:mkeb18245f4e856def75ebd2ed9d7f419fc7cebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:03.495181  507966 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:23:03.495208  507966 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:23:03.495223  507966 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:23:03.495254  507966 start.go:360] acquireMachinesLock for default-k8s-diff-port-194354: {Name:mk7fca26c3bc028a411ed52e5a78e2fb6f90caca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:03.495376  507966 start.go:364] duration metric: took 96.231µs to acquireMachinesLock for "default-k8s-diff-port-194354"
	I1129 10:23:03.495410  507966 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:03.495480  507966 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:23:03.499537  507966 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:23:03.499798  507966 start.go:159] libmachine.API.Create for "default-k8s-diff-port-194354" (driver="docker")
	I1129 10:23:03.499833  507966 client.go:173] LocalClient.Create starting
	I1129 10:23:03.499909  507966 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:23:03.499951  507966 main.go:143] libmachine: Decoding PEM data...
	I1129 10:23:03.499972  507966 main.go:143] libmachine: Parsing certificate...
	I1129 10:23:03.500029  507966 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:23:03.500058  507966 main.go:143] libmachine: Decoding PEM data...
	I1129 10:23:03.500075  507966 main.go:143] libmachine: Parsing certificate...
	I1129 10:23:03.500485  507966 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-194354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:23:03.517390  507966 cli_runner.go:211] docker network inspect default-k8s-diff-port-194354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:23:03.517480  507966 network_create.go:284] running [docker network inspect default-k8s-diff-port-194354] to gather additional debugging logs...
	I1129 10:23:03.517498  507966 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-194354
	W1129 10:23:03.534456  507966 cli_runner.go:211] docker network inspect default-k8s-diff-port-194354 returned with exit code 1
	I1129 10:23:03.534489  507966 network_create.go:287] error running [docker network inspect default-k8s-diff-port-194354]: docker network inspect default-k8s-diff-port-194354: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-194354 not found
	I1129 10:23:03.534503  507966 network_create.go:289] output of [docker network inspect default-k8s-diff-port-194354]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-194354 not found
	
	** /stderr **
	I1129 10:23:03.534599  507966 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:23:03.551498  507966 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:23:03.551924  507966 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:23:03.552179  507966 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:23:03.552471  507966 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62b6fd5a8cb7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:67:a9:2f:c4:62} reservation:<nil>}
	I1129 10:23:03.552870  507966 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a22c90}
	I1129 10:23:03.552887  507966 network_create.go:124] attempt to create docker network default-k8s-diff-port-194354 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 10:23:03.552949  507966 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-194354 default-k8s-diff-port-194354
	I1129 10:23:03.612944  507966 network_create.go:108] docker network default-k8s-diff-port-194354 192.168.85.0/24 created
	I1129 10:23:03.612981  507966 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-194354" container
	I1129 10:23:03.613073  507966 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:23:03.629508  507966 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-194354 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-194354 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:23:03.653094  507966 oci.go:103] Successfully created a docker volume default-k8s-diff-port-194354
	I1129 10:23:03.653194  507966 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-194354-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-194354 --entrypoint /usr/bin/test -v default-k8s-diff-port-194354:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:23:04.221182  507966 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-194354
	I1129 10:23:04.221257  507966 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:04.221273  507966 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 10:23:04.221360  507966 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-194354:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
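(The tail of the Last Start excerpt shows minikube extracting the cached preload tarball into the new profile's docker volume instead of pulling images. As a rough sketch, the tarball's contents can be listed locally, assuming lz4 is installed and using the cache path from the log line above:

    lz4 -dc /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 | tar -t | head
)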
	
	
	==> CRI-O <==
	Nov 29 10:22:56 no-preload-949993 crio[838]: time="2025-11-29T10:22:56.879356069Z" level=info msg="Created container 3185e398968ecae947fbf64061fd1fd24f1dca211e0feb87d2f56b1f8352c9ce: kube-system/coredns-66bc5c9577-vcgbt/coredns" id=0059255d-0615-4e90-923c-f0ff2c268a0f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:22:56 no-preload-949993 crio[838]: time="2025-11-29T10:22:56.880328758Z" level=info msg="Starting container: 3185e398968ecae947fbf64061fd1fd24f1dca211e0feb87d2f56b1f8352c9ce" id=13e237f8-3a9d-4fc7-81bc-afe2eb28fb84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:22:56 no-preload-949993 crio[838]: time="2025-11-29T10:22:56.886585512Z" level=info msg="Started container" PID=2485 containerID=3185e398968ecae947fbf64061fd1fd24f1dca211e0feb87d2f56b1f8352c9ce description=kube-system/coredns-66bc5c9577-vcgbt/coredns id=13e237f8-3a9d-4fc7-81bc-afe2eb28fb84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a7062a2321219b5c85f552e36b3d2ff68accc2038f6b4996bf493825721129f
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.42696307Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e52626e3-bb3b-4f37-8f3c-99d0ae625b0e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.427055305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.437575084Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6 UID:3fdb1ddf-7704-4f35-9630-eb7a372800cd NetNS:/var/run/netns/c5a78856-6af6-4bf4-8d6e-cf7fc70fe74f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026be6c8}] Aliases:map[]}"
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.437936549Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.457453004Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6 UID:3fdb1ddf-7704-4f35-9630-eb7a372800cd NetNS:/var/run/netns/c5a78856-6af6-4bf4-8d6e-cf7fc70fe74f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026be6c8}] Aliases:map[]}"
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.457783379Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.466412642Z" level=info msg="Ran pod sandbox 96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6 with infra container: default/busybox/POD" id=e52626e3-bb3b-4f37-8f3c-99d0ae625b0e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.468156564Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=42111b1a-64b7-4936-9b88-533068110575 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.468748702Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=42111b1a-64b7-4936-9b88-533068110575 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.468918107Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=42111b1a-64b7-4936-9b88-533068110575 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.472121501Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0df85e63-808c-47c5-a39c-d56d51cc0652 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:23:00 no-preload-949993 crio[838]: time="2025-11-29T10:23:00.474027706Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.623562265Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0df85e63-808c-47c5-a39c-d56d51cc0652 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.624243668Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae2ecba4-fac6-442d-b2fa-7f3a1d525ca0 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.628505502Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=97235108-8c8f-4059-9111-c6910cd62505 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.638675042Z" level=info msg="Creating container: default/busybox/busybox" id=903a71ae-a7ff-405e-b9cf-92adbd3e49a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.638849895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.644970172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.645490318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.668646004Z" level=info msg="Created container c0e0093e084cbf3953f694483956f63a5a38ed3901bb0cd4454a9a48ad11a9bd: default/busybox/busybox" id=903a71ae-a7ff-405e-b9cf-92adbd3e49a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.672032735Z" level=info msg="Starting container: c0e0093e084cbf3953f694483956f63a5a38ed3901bb0cd4454a9a48ad11a9bd" id=b355a647-06ac-4131-a740-375ec433951e name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:23:02 no-preload-949993 crio[838]: time="2025-11-29T10:23:02.676283583Z" level=info msg="Started container" PID=2535 containerID=c0e0093e084cbf3953f694483956f63a5a38ed3901bb0cd4454a9a48ad11a9bd description=default/busybox/busybox id=b355a647-06ac-4131-a740-375ec433951e name=/runtime.v1.RuntimeService/StartContainer sandboxID=96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6
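(The CRI-O entries above cover the busybox test pod: image pull, container creation, container start. A sketch of how the same state could be confirmed from the host via the profile's SSH session with crictl, reusing the profile name and binary path from this run:

    out/minikube-linux-arm64 -p no-preload-949993 ssh -- sudo crictl ps --name busybox
    out/minikube-linux-arm64 -p no-preload-949993 ssh -- sudo crictl images | grep busybox
)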
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c0e0093e084cb       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   96a2c1ffefc0e       busybox                                     default
	3185e398968ec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   6a7062a232121       coredns-66bc5c9577-vcgbt                    kube-system
	ffbeb8ba04e7d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   a67248a07a5c6       storage-provisioner                         kube-system
	a81621ac0affe       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   b40193749863a       kindnet-jxmnq                               kube-system
	2a03ae6342236       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   25d58e88d4a63       kube-proxy-ffl4g                            kube-system
	0d7eb9a3646af       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      40 seconds ago      Running             kube-apiserver            0                   f44f92c9546a6       kube-apiserver-no-preload-949993            kube-system
	49dea87e9c871       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      40 seconds ago      Running             kube-scheduler            0                   66a77e0d4e60f       kube-scheduler-no-preload-949993            kube-system
	2ced545b57bc0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      40 seconds ago      Running             kube-controller-manager   0                   066dfd071c1d7       kube-controller-manager-no-preload-949993   kube-system
	0210f6bf7b478       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      40 seconds ago      Running             etcd                      0                   1aaaae958641e       etcd-no-preload-949993                      kube-system
	
	
	==> coredns [3185e398968ecae947fbf64061fd1fd24f1dca211e0feb87d2f56b1f8352c9ce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59903 - 23038 "HINFO IN 6468242646610201255.7645085303278850577. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03601859s
	
	
	==> describe nodes <==
	Name:               no-preload-949993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-949993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-949993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_22_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-949993
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:23:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:23:08 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:23:08 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:23:08 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:23:08 +0000   Sat, 29 Nov 2025 10:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-949993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                5439880d-b2ce-4fc8-b8d7-05ac5d12654c
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-vcgbt                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-949993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-jxmnq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-949993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-949993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-ffl4g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-949993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-949993 event: Registered Node no-preload-949993 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-949993 status is now: NodeReady
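(The node description above is a point-in-time snapshot captured by minikube logs. Against the still-running cluster, the same view, and the pod list behind the Non-terminated Pods table, could be regenerated with kubectl, assuming the kubeconfig context minikube names after the profile:

    kubectl --context no-preload-949993 describe node no-preload-949993
    kubectl --context no-preload-949993 get pods -A -o wide
)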
	
	
	==> dmesg <==
	[Nov29 09:53] overlayfs: idmapped layers are currently not supported
	[Nov29 09:54] overlayfs: idmapped layers are currently not supported
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0210f6bf7b47868ffecaa4b027a247624f52859b7d69f17d9c41ce55232ad6ea] <==
	{"level":"warn","ts":"2025-11-29T10:22:32.697914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.730815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.771534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.787823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.812900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.854244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.874155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.905089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.935055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:32.993196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.029607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.067654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.093740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.125006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.148450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.179741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.206831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.237376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.261923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.302315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.346146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.372776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.410491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.436024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:22:33.546146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:23:11 up  3:05,  0 user,  load average: 4.92, 3.50, 2.67
	Linux no-preload-949993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a81621ac0affed3df63f4f286fed7ce96c22f4c2208daaedc7527d3ce5381b5a] <==
	I1129 10:22:45.923806       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:22:46.014409       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:22:46.014659       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:22:46.014801       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:22:46.014830       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:22:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:22:46.220155       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:22:46.220244       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:22:46.220278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:22:46.220431       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 10:22:46.420489       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:22:46.420640       1 metrics.go:72] Registering metrics
	I1129 10:22:46.420955       1 controller.go:711] "Syncing nftables rules"
	I1129 10:22:56.222808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:22:56.222862       1 main.go:301] handling current node
	I1129 10:23:06.218210       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:23:06.218318       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d7eb9a3646aff9c3d256bdb7ce12c6df002281f4ab2d5cbded23c751c6d1edc] <==
	I1129 10:22:34.581228       1 controller.go:667] quota admission added evaluator for: namespaces
	E1129 10:22:34.581520       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1129 10:22:34.583027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:22:34.628659       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:22:34.635425       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 10:22:34.686707       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:22:34.686825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:22:34.821280       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:22:35.249247       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 10:22:35.256543       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 10:22:35.256572       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:22:36.045164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:22:36.105344       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:22:36.247283       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 10:22:36.254559       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 10:22:36.255756       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:22:36.265324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:22:36.285408       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:22:37.287274       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:22:37.335392       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 10:22:37.352500       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 10:22:42.091478       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 10:22:42.209707       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:22:42.242004       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:22:42.273778       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2ced545b57bc04636eb5a0f6e76ecd82005d385a374df8e160d4464736f45b7f] <==
	I1129 10:22:41.311025       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:22:41.311055       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:22:41.311083       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 10:22:41.317856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:41.320107       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:22:41.321338       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-949993" podCIDRs=["10.244.0.0/24"]
	I1129 10:22:41.328320       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 10:22:41.332894       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:22:41.332999       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:22:41.333031       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:22:41.333605       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:22:41.334112       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:22:41.335305       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:22:41.335377       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:22:41.335959       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 10:22:41.337154       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:22:41.338309       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 10:22:41.338596       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 10:22:41.338623       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 10:22:41.338871       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:22:41.340102       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:22:41.344340       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 10:22:41.344348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:22:41.347696       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 10:23:01.287111       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a03ae6342236ae7ef579089fd45de99dcb601f51af684b4aeac6d44ee222b9b] <==
	I1129 10:22:43.240125       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:22:43.434778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:22:43.535665       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:22:43.535696       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:22:43.535825       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:22:43.574016       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:22:43.574111       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:22:43.579958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:22:43.587978       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:22:43.588072       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:43.589620       1 config.go:200] "Starting service config controller"
	I1129 10:22:43.589632       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:22:43.589650       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:22:43.589655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:22:43.589683       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:22:43.589687       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:22:43.590385       1 config.go:309] "Starting node config controller"
	I1129 10:22:43.590393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:22:43.590399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:22:43.690491       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:22:43.690559       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:22:43.690679       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [49dea87e9c871050feaa701736af70a3391385f614b52cd60464923f5d3a8579] <==
	I1129 10:22:35.494729       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:22:35.510532       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1129 10:22:35.521101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:22:35.521249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1129 10:22:35.522592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:22:35.528688       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:22:35.528830       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1129 10:22:35.530322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:22:35.530403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:22:35.530451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:22:35.530587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:22:35.530641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:22:35.530688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:22:35.530734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:22:35.530855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:22:35.530901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:22:35.531042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:22:35.531122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:22:35.531177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:22:35.533310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:22:35.538451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:22:35.538784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:22:35.538878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:22:35.541270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1129 10:22:36.629157       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: I1129 10:22:42.214139    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f62b4d17-773c-4a38-ba6c-4ac103f38b3d-kube-proxy\") pod \"kube-proxy-ffl4g\" (UID: \"f62b4d17-773c-4a38-ba6c-4ac103f38b3d\") " pod="kube-system/kube-proxy-ffl4g"
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: I1129 10:22:42.214157    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f62b4d17-773c-4a38-ba6c-4ac103f38b3d-lib-modules\") pod \"kube-proxy-ffl4g\" (UID: \"f62b4d17-773c-4a38-ba6c-4ac103f38b3d\") " pod="kube-system/kube-proxy-ffl4g"
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: I1129 10:22:42.214189    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fb632bfa-f7ff-459c-8b50-8213e1d36462-cni-cfg\") pod \"kindnet-jxmnq\" (UID: \"fb632bfa-f7ff-459c-8b50-8213e1d36462\") " pod="kube-system/kindnet-jxmnq"
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: I1129 10:22:42.214211    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87xs\" (UniqueName: \"kubernetes.io/projected/fb632bfa-f7ff-459c-8b50-8213e1d36462-kube-api-access-s87xs\") pod \"kindnet-jxmnq\" (UID: \"fb632bfa-f7ff-459c-8b50-8213e1d36462\") " pod="kube-system/kindnet-jxmnq"
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411368    1997 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411403    1997 projected.go:196] Error preparing data for projected volume kube-api-access-c4x67 for pod kube-system/kube-proxy-ffl4g: configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411485    1997 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f62b4d17-773c-4a38-ba6c-4ac103f38b3d-kube-api-access-c4x67 podName:f62b4d17-773c-4a38-ba6c-4ac103f38b3d nodeName:}" failed. No retries permitted until 2025-11-29 10:22:42.911457929 +0000 UTC m=+5.768542417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c4x67" (UniqueName: "kubernetes.io/projected/f62b4d17-773c-4a38-ba6c-4ac103f38b3d-kube-api-access-c4x67") pod "kube-proxy-ffl4g" (UID: "f62b4d17-773c-4a38-ba6c-4ac103f38b3d") : configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411885    1997 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411902    1997 projected.go:196] Error preparing data for projected volume kube-api-access-s87xs for pod kube-system/kindnet-jxmnq: configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: E1129 10:22:42.411942    1997 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb632bfa-f7ff-459c-8b50-8213e1d36462-kube-api-access-s87xs podName:fb632bfa-f7ff-459c-8b50-8213e1d36462 nodeName:}" failed. No retries permitted until 2025-11-29 10:22:42.91192886 +0000 UTC m=+5.769013348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s87xs" (UniqueName: "kubernetes.io/projected/fb632bfa-f7ff-459c-8b50-8213e1d36462-kube-api-access-s87xs") pod "kindnet-jxmnq" (UID: "fb632bfa-f7ff-459c-8b50-8213e1d36462") : configmap "kube-root-ca.crt" not found
	Nov 29 10:22:42 no-preload-949993 kubelet[1997]: I1129 10:22:42.926876    1997 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:22:43 no-preload-949993 kubelet[1997]: W1129 10:22:43.096851    1997 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-25d58e88d4a6319fd2062b2694ff86887b67d6ae935f4212d0637debb5adf3d8 WatchSource:0}: Error finding container 25d58e88d4a6319fd2062b2694ff86887b67d6ae935f4212d0637debb5adf3d8: Status 404 returned error can't find the container with id 25d58e88d4a6319fd2062b2694ff86887b67d6ae935f4212d0637debb5adf3d8
	Nov 29 10:22:46 no-preload-949993 kubelet[1997]: I1129 10:22:46.442757    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ffl4g" podStartSLOduration=4.442739058 podStartE2EDuration="4.442739058s" podCreationTimestamp="2025-11-29 10:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:22:43.438897066 +0000 UTC m=+6.295981562" watchObservedRunningTime="2025-11-29 10:22:46.442739058 +0000 UTC m=+9.299823554"
	Nov 29 10:22:47 no-preload-949993 kubelet[1997]: I1129 10:22:47.473403    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jxmnq" podStartSLOduration=2.720925031 podStartE2EDuration="5.47337845s" podCreationTimestamp="2025-11-29 10:22:42 +0000 UTC" firstStartedPulling="2025-11-29 10:22:43.077798118 +0000 UTC m=+5.934882606" lastFinishedPulling="2025-11-29 10:22:45.830251537 +0000 UTC m=+8.687336025" observedRunningTime="2025-11-29 10:22:46.444333555 +0000 UTC m=+9.301418043" watchObservedRunningTime="2025-11-29 10:22:47.47337845 +0000 UTC m=+10.330462946"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: I1129 10:22:56.350174    1997 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: I1129 10:22:56.527413    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52333222-cd4d-4c66-aa3e-1aa0fa9e1078-config-volume\") pod \"coredns-66bc5c9577-vcgbt\" (UID: \"52333222-cd4d-4c66-aa3e-1aa0fa9e1078\") " pod="kube-system/coredns-66bc5c9577-vcgbt"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: I1129 10:22:56.527472    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkpxf\" (UniqueName: \"kubernetes.io/projected/52333222-cd4d-4c66-aa3e-1aa0fa9e1078-kube-api-access-hkpxf\") pod \"coredns-66bc5c9577-vcgbt\" (UID: \"52333222-cd4d-4c66-aa3e-1aa0fa9e1078\") " pod="kube-system/coredns-66bc5c9577-vcgbt"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: I1129 10:22:56.527496    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b85d010c-01c5-42c7-83b9-578437039e17-tmp\") pod \"storage-provisioner\" (UID: \"b85d010c-01c5-42c7-83b9-578437039e17\") " pod="kube-system/storage-provisioner"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: I1129 10:22:56.527518    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrg8\" (UniqueName: \"kubernetes.io/projected/b85d010c-01c5-42c7-83b9-578437039e17-kube-api-access-vfrg8\") pod \"storage-provisioner\" (UID: \"b85d010c-01c5-42c7-83b9-578437039e17\") " pod="kube-system/storage-provisioner"
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: W1129 10:22:56.743175    1997 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-a67248a07a5c668dfcd9392779ad878cba32af87130654e13ea7cc4ef1d9806f WatchSource:0}: Error finding container a67248a07a5c668dfcd9392779ad878cba32af87130654e13ea7cc4ef1d9806f: Status 404 returned error can't find the container with id a67248a07a5c668dfcd9392779ad878cba32af87130654e13ea7cc4ef1d9806f
	Nov 29 10:22:56 no-preload-949993 kubelet[1997]: W1129 10:22:56.783031    1997 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-6a7062a2321219b5c85f552e36b3d2ff68accc2038f6b4996bf493825721129f WatchSource:0}: Error finding container 6a7062a2321219b5c85f552e36b3d2ff68accc2038f6b4996bf493825721129f: Status 404 returned error can't find the container with id 6a7062a2321219b5c85f552e36b3d2ff68accc2038f6b4996bf493825721129f
	Nov 29 10:22:57 no-preload-949993 kubelet[1997]: I1129 10:22:57.513520    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vcgbt" podStartSLOduration=15.513482646 podStartE2EDuration="15.513482646s" podCreationTimestamp="2025-11-29 10:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:22:57.513386604 +0000 UTC m=+20.370471100" watchObservedRunningTime="2025-11-29 10:22:57.513482646 +0000 UTC m=+20.370567200"
	Nov 29 10:22:57 no-preload-949993 kubelet[1997]: I1129 10:22:57.514306    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.514292584 podStartE2EDuration="14.514292584s" podCreationTimestamp="2025-11-29 10:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:22:57.487867789 +0000 UTC m=+20.344952285" watchObservedRunningTime="2025-11-29 10:22:57.514292584 +0000 UTC m=+20.371377187"
	Nov 29 10:22:59 no-preload-949993 kubelet[1997]: I1129 10:22:59.963858    1997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt6t\" (UniqueName: \"kubernetes.io/projected/3fdb1ddf-7704-4f35-9630-eb7a372800cd-kube-api-access-8rt6t\") pod \"busybox\" (UID: \"3fdb1ddf-7704-4f35-9630-eb7a372800cd\") " pod="default/busybox"
	Nov 29 10:23:00 no-preload-949993 kubelet[1997]: W1129 10:23:00.464081    1997 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6 WatchSource:0}: Error finding container 96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6: Status 404 returned error can't find the container with id 96a2c1ffefc0e6d2b2890600aba3aa3a7bae9599b2312becde0e26d37c1393d6
	
	
	==> storage-provisioner [ffbeb8ba04e7d1694c10e98ba169a74efba29c86396347e60c940a49363d24cd] <==
	I1129 10:22:56.848399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:22:56.876976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:22:56.877049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:22:56.894637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:56.914474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:56.914747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:22:56.917831       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-949993_74b0d43c-9203-42e4-980f-cb39e36e9f77!
	I1129 10:22:56.914827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad137e71-6950-4dad-a697-38d979710672", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-949993_74b0d43c-9203-42e4-980f-cb39e36e9f77 became leader
	W1129 10:22:56.920551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:56.941177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:22:57.022727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-949993_74b0d43c-9203-42e4-980f-cb39e36e9f77!
	W1129 10:22:58.947430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:22:58.952026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:00.955239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:00.962261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:02.965555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:02.973053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:04.976640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:04.985556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:06.990205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:06.996952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:09.005193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:09.028456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:11.035068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:23:11.046774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-949993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-949993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-949993 --alsologtostderr -v=1: exit status 80 (2.023089301s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-949993 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:24:32.247508  513480 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:24:32.247655  513480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:24:32.247694  513480 out.go:374] Setting ErrFile to fd 2...
	I1129 10:24:32.247706  513480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:24:32.248090  513480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:24:32.248834  513480 out.go:368] Setting JSON to false
	I1129 10:24:32.248899  513480 mustload.go:66] Loading cluster: no-preload-949993
	I1129 10:24:32.249341  513480 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:24:32.249925  513480 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:24:32.275122  513480 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:24:32.275499  513480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:24:32.344584  513480 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:24:32.335030791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:24:32.345272  513480 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-949993 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:24:32.348439  513480 out.go:179] * Pausing node no-preload-949993 ... 
	I1129 10:24:32.351123  513480 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:24:32.351509  513480 ssh_runner.go:195] Run: systemctl --version
	I1129 10:24:32.351564  513480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:24:32.369994  513480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:24:32.477155  513480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:32.495786  513480 pause.go:52] kubelet running: true
	I1129 10:24:32.495863  513480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:24:32.778431  513480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:24:32.778521  513480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:24:32.855194  513480 cri.go:89] found id: "0d6f7b88b3afc61aa717c1348d07d0eac84c75a53fcc5fe33c5fde61127d07c6"
	I1129 10:24:32.855218  513480 cri.go:89] found id: "7ebed6c6ed7cd2d1b7be212ae286908fb6ab40e4e3423dc80536de93d275207c"
	I1129 10:24:32.855223  513480 cri.go:89] found id: "edde7d522ee76ca987e27608ab5a2d4ac968957b65986bd758dc0841ffba33e2"
	I1129 10:24:32.855228  513480 cri.go:89] found id: "0bd947bb8314f6126db11c7ce0f7f06d2894741d282df841341a8467fadae7c6"
	I1129 10:24:32.855231  513480 cri.go:89] found id: "c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e"
	I1129 10:24:32.855235  513480 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:24:32.855238  513480 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:24:32.855246  513480 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:24:32.855250  513480 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:24:32.855258  513480 cri.go:89] found id: "824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	I1129 10:24:32.855261  513480 cri.go:89] found id: "8073d01e01f256d04be1a4778a38b492288c905eb68f7f59e9e88869f602b4c9"
	I1129 10:24:32.855264  513480 cri.go:89] found id: ""
	I1129 10:24:32.855315  513480 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:24:32.875429  513480 retry.go:31] will retry after 364.203545ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:24:32Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:24:33.239962  513480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:33.253471  513480 pause.go:52] kubelet running: false
	I1129 10:24:33.253535  513480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:24:33.430147  513480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:24:33.430286  513480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:24:33.503195  513480 cri.go:89] found id: "0d6f7b88b3afc61aa717c1348d07d0eac84c75a53fcc5fe33c5fde61127d07c6"
	I1129 10:24:33.503224  513480 cri.go:89] found id: "7ebed6c6ed7cd2d1b7be212ae286908fb6ab40e4e3423dc80536de93d275207c"
	I1129 10:24:33.503229  513480 cri.go:89] found id: "edde7d522ee76ca987e27608ab5a2d4ac968957b65986bd758dc0841ffba33e2"
	I1129 10:24:33.503233  513480 cri.go:89] found id: "0bd947bb8314f6126db11c7ce0f7f06d2894741d282df841341a8467fadae7c6"
	I1129 10:24:33.503236  513480 cri.go:89] found id: "c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e"
	I1129 10:24:33.503240  513480 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:24:33.503243  513480 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:24:33.503246  513480 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:24:33.503249  513480 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:24:33.503255  513480 cri.go:89] found id: "824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	I1129 10:24:33.503258  513480 cri.go:89] found id: "8073d01e01f256d04be1a4778a38b492288c905eb68f7f59e9e88869f602b4c9"
	I1129 10:24:33.503262  513480 cri.go:89] found id: ""
	I1129 10:24:33.503316  513480 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:24:33.514521  513480 retry.go:31] will retry after 403.891732ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:24:33Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:24:33.919164  513480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:33.932245  513480 pause.go:52] kubelet running: false
	I1129 10:24:33.932340  513480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:24:34.114950  513480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:24:34.115052  513480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:24:34.185961  513480 cri.go:89] found id: "0d6f7b88b3afc61aa717c1348d07d0eac84c75a53fcc5fe33c5fde61127d07c6"
	I1129 10:24:34.185992  513480 cri.go:89] found id: "7ebed6c6ed7cd2d1b7be212ae286908fb6ab40e4e3423dc80536de93d275207c"
	I1129 10:24:34.185998  513480 cri.go:89] found id: "edde7d522ee76ca987e27608ab5a2d4ac968957b65986bd758dc0841ffba33e2"
	I1129 10:24:34.186002  513480 cri.go:89] found id: "0bd947bb8314f6126db11c7ce0f7f06d2894741d282df841341a8467fadae7c6"
	I1129 10:24:34.186005  513480 cri.go:89] found id: "c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e"
	I1129 10:24:34.186009  513480 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:24:34.186012  513480 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:24:34.186015  513480 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:24:34.186018  513480 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:24:34.186044  513480 cri.go:89] found id: "824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	I1129 10:24:34.186053  513480 cri.go:89] found id: "8073d01e01f256d04be1a4778a38b492288c905eb68f7f59e9e88869f602b4c9"
	I1129 10:24:34.186057  513480 cri.go:89] found id: ""
	I1129 10:24:34.186143  513480 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:24:34.201538  513480 out.go:203] 
	W1129 10:24:34.204392  513480 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:24:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:24:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:24:34.204458  513480 out.go:285] * 
	* 
	W1129 10:24:34.211634  513480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:24:34.214493  513480 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-949993 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-949993
helpers_test.go:243: (dbg) docker inspect no-preload-949993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	        "Created": "2025-11-29T10:21:50.556040223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:23:24.739863264Z",
	            "FinishedAt": "2025-11-29T10:23:23.658337472Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3-json.log",
	        "Name": "/no-preload-949993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-949993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-949993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	                "LowerDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-949993",
	                "Source": "/var/lib/docker/volumes/no-preload-949993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-949993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-949993",
	                "name.minikube.sigs.k8s.io": "no-preload-949993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0184fde988acfcb947a9f2cad32aad001c6f35990995251ab8db8a05779b7731",
	            "SandboxKey": "/var/run/docker/netns/0184fde988ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-949993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:7a:21:e4:92:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62b6fd5a8cb74510b8e0db3c4b4e346db103446743514dcfc437d8e74be8a4c3",
	                    "EndpointID": "6abf395f43fd246877843c5c1540a9d4538e21f557623ca3d5f4c397a9140a94",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-949993",
	                        "01cb8829dafd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
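In the inspect output above, HostConfig.PortBindings leaves every HostPort empty, so Docker assigns the published ports when the container starts and they only show up under NetworkSettings.Ports (22/tcp -> 33451 for this run). A minimal Go sketch, under those assumptions, of reading the mapped SSH port back with the same NetworkSettings.Ports template that appears in the cli_runner calls in the "Last Start" log below (without the extra quoting); the profile name no-preload-949993 is taken from this report, the rest is illustrative:

// port_lookup.go - sketch: read the host port Docker assigned to 22/tcp for the
// minikube node container. Assumes docker is on PATH and the container exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPortForSSH(container string) (string, error) {
	// Same inspect template used by minikube's cli_runner in the log below:
	// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("no-preload-949993")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// For the container captured above this prints 33451.
	fmt.Println("ssh host port:", port)
}
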
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993: exit status 2 (358.112277ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
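The Host field prints Running while the status command exits with code 2, which the helper tolerates ("may be ok"): after the pause attempt the cluster is not in a fully healthy running state even though the node container itself is up. A hedged sketch, not minikube's own test code, of driving the same probe and separating the exit code from the printed host state; binary path and profile name come from the log above, the wrapper is illustrative:

// status_check.go - sketch: rerun the host status probe from this report and
// tolerate a non-zero exit the way the "(may be ok)" line above does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-949993", "-n", "no-preload-949993")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Path taken in the run above: host reads "Running", exit code is 2.
		fmt.Printf("host=%q, exit=%d (may be ok)\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host=%q, exit=0\n", host)
}
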
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-949993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-949993 logs -n 25: (1.327101052s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:18 UTC │ 29 Nov 25 10:19 UTC │
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                                                                                               │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                                                                                     │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                                                                                               │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:23:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:23:24.356728  510582 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:23:24.356918  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.356945  510582 out.go:374] Setting ErrFile to fd 2...
	I1129 10:23:24.356964  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.357268  510582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:23:24.357953  510582 out.go:368] Setting JSON to false
	I1129 10:23:24.358973  510582 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11154,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:23:24.359078  510582 start.go:143] virtualization:  
	I1129 10:23:24.362143  510582 out.go:179] * [no-preload-949993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:23:24.366229  510582 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:23:24.366312  510582 notify.go:221] Checking for updates...
	I1129 10:23:24.370325  510582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:23:24.373233  510582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:24.376196  510582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:23:24.379186  510582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:23:24.382194  510582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:23:24.385561  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:24.386290  510582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:23:24.432374  510582 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:23:24.432489  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.519043  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.509678937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.519145  510582 docker.go:319] overlay module found
	I1129 10:23:24.522278  510582 out.go:179] * Using the docker driver based on existing profile
	I1129 10:23:24.525120  510582 start.go:309] selected driver: docker
	I1129 10:23:24.525138  510582 start.go:927] validating driver "docker" against &{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.525239  510582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:23:24.525907  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.635064  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.624852127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.635418  510582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:24.635452  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:24.635516  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:24.635567  510582 start.go:353] cluster config:
	{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.640561  510582 out.go:179] * Starting "no-preload-949993" primary control-plane node in "no-preload-949993" cluster
	I1129 10:23:24.643341  510582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:23:24.646345  510582 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:23:24.649223  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:24.649366  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:24.649692  510582 cache.go:107] acquiring lock: {Name:mk7e036f21c3fa53998769ec8ca8e9d0cc43797a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.649767  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 10:23:24.649776  510582 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.071µs
	I1129 10:23:24.649788  510582 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 10:23:24.649800  510582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:23:24.650013  510582 cache.go:107] acquiring lock: {Name:mkec0dc08372453f12658d7249505bdb38e0468a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650140  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 10:23:24.650153  510582 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 145.208µs
	I1129 10:23:24.650160  510582 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 10:23:24.650183  510582 cache.go:107] acquiring lock: {Name:mk55e5c5c1d216b13668659dfb1a1298483fe357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650228  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 10:23:24.650234  510582 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.531µs
	I1129 10:23:24.650240  510582 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 10:23:24.650250  510582 cache.go:107] acquiring lock: {Name:mk79de74aa677651359631e14e64f02dbae72c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650278  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 10:23:24.650283  510582 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 34.487µs
	I1129 10:23:24.650289  510582 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 10:23:24.650298  510582 cache.go:107] acquiring lock: {Name:mk3420fbe5609e73633731fff1b3352eed3a8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650322  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 10:23:24.650327  510582 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.573µs
	I1129 10:23:24.650333  510582 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 10:23:24.650348  510582 cache.go:107] acquiring lock: {Name:mkc2341e09a949f9273b1d33b0a3b4021526fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650378  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 10:23:24.650383  510582 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.472µs
	I1129 10:23:24.650388  510582 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 10:23:24.650397  510582 cache.go:107] acquiring lock: {Name:mkb12ce0a127601415f42976e337ea76e82915af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650520  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 10:23:24.650532  510582 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 134.861µs
	I1129 10:23:24.650539  510582 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 10:23:24.650574  510582 cache.go:107] acquiring lock: {Name:mk0167a0bfcd689b945be8d473d2efef87ce9fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650609  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 10:23:24.650614  510582 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 43.085µs
	I1129 10:23:24.650627  510582 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 10:23:24.650634  510582 cache.go:87] Successfully saved all images to host disk.
	I1129 10:23:24.685204  510582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:23:24.685223  510582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:23:24.685237  510582 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:23:24.685268  510582 start.go:360] acquireMachinesLock for no-preload-949993: {Name:mk6ff94a11813e006c209466e9cbb5aadf7ae1bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.685314  510582 start.go:364] duration metric: took 32.796µs to acquireMachinesLock for "no-preload-949993"
	I1129 10:23:24.685333  510582 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:23:24.685338  510582 fix.go:54] fixHost starting: 
	I1129 10:23:24.685582  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:24.702230  510582 fix.go:112] recreateIfNeeded on no-preload-949993: state=Stopped err=<nil>
	W1129 10:23:24.702266  510582 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:23:23.758317  507966 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:23:23.758401  507966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:23:24.783441  507966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:23:25.344447  507966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:23:25.588671  507966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:23:26.241886  507966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:23:26.665337  507966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:23:26.666189  507966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:23:26.669919  507966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:23:26.672450  507966 out.go:252]   - Booting up control plane ...
	I1129 10:23:26.672562  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:23:26.672660  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:23:26.673289  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:23:26.693850  507966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:23:26.693960  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:23:26.703944  507966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:23:26.704045  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:23:26.704084  507966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:23:26.854967  507966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:23:26.855128  507966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 10:23:27.856349  507966 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00179272s
	I1129 10:23:27.859935  507966 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:23:27.860265  507966 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1129 10:23:27.860364  507966 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:23:27.860445  507966 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:23:24.705502  510582 out.go:252] * Restarting existing docker container for "no-preload-949993" ...
	I1129 10:23:24.705587  510582 cli_runner.go:164] Run: docker start no-preload-949993
	I1129 10:23:25.023675  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:25.045186  510582 kic.go:430] container "no-preload-949993" state is running.
	I1129 10:23:25.045565  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:25.083414  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:25.083658  510582 machine.go:94] provisionDockerMachine start ...
	I1129 10:23:25.083731  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:25.111287  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:25.111617  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:25.111633  510582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:23:25.114058  510582 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:23:28.290462  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.290551  510582 ubuntu.go:182] provisioning hostname "no-preload-949993"
	I1129 10:23:28.290661  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.319815  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.320120  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.320131  510582 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-949993 && echo "no-preload-949993" | sudo tee /etc/hostname
	I1129 10:23:28.514673  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.514831  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.546204  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.546531  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.546547  510582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-949993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-949993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-949993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:23:28.722758  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:23:28.722825  510582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:23:28.722873  510582 ubuntu.go:190] setting up certificates
	I1129 10:23:28.722920  510582 provision.go:84] configureAuth start
	I1129 10:23:28.723001  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:28.747313  510582 provision.go:143] copyHostCerts
	I1129 10:23:28.747381  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:23:28.747394  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:23:28.747471  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:23:28.747565  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:23:28.747576  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:23:28.747601  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:23:28.747692  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:23:28.747697  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:23:28.747720  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:23:28.747771  510582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.no-preload-949993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-949993]
	I1129 10:23:28.975038  510582 provision.go:177] copyRemoteCerts
	I1129 10:23:28.975112  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:23:28.975156  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.993793  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.108273  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:23:29.132875  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:23:29.155284  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:23:29.177774  510582 provision.go:87] duration metric: took 454.822245ms to configureAuth
	I1129 10:23:29.177842  510582 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:23:29.178060  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:29.178232  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.215501  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:29.215806  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:29.215820  510582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:23:29.720457  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:23:29.720476  510582 machine.go:97] duration metric: took 4.636809496s to provisionDockerMachine
	I1129 10:23:29.720488  510582 start.go:293] postStartSetup for "no-preload-949993" (driver="docker")
	I1129 10:23:29.720500  510582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:23:29.720580  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:23:29.720624  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.750233  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.879484  510582 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:23:29.890502  510582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:23:29.890528  510582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:23:29.890540  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:23:29.890595  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:23:29.890671  510582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:23:29.890774  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:23:29.905057  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:29.930848  510582 start.go:296] duration metric: took 210.345457ms for postStartSetup
	I1129 10:23:29.930993  510582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:23:29.931069  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.971652  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.096636  510582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:23:30.102671  510582 fix.go:56] duration metric: took 5.41732648s for fixHost
	I1129 10:23:30.102702  510582 start.go:83] releasing machines lock for "no-preload-949993", held for 5.417379174s
	I1129 10:23:30.102796  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:30.142397  510582 ssh_runner.go:195] Run: cat /version.json
	I1129 10:23:30.142457  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.142723  510582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:23:30.142778  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.175555  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.184560  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.424237  510582 ssh_runner.go:195] Run: systemctl --version
	I1129 10:23:30.433934  510582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:23:30.496866  510582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:23:30.502557  510582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:23:30.502631  510582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:23:30.517425  510582 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:23:30.517452  510582 start.go:496] detecting cgroup driver to use...
	I1129 10:23:30.517485  510582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:23:30.517555  510582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:23:30.541820  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:23:30.562240  510582 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:23:30.562306  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:23:30.595997  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:23:30.617442  510582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:23:30.838520  510582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:23:31.010539  510582 docker.go:234] disabling docker service ...
	I1129 10:23:31.010613  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:23:31.029478  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:23:31.045443  510582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:23:31.298555  510582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:23:31.483563  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:23:31.507844  510582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:23:31.536839  510582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:23:31.536921  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.554849  510582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:23:31.554919  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.574580  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.583353  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.595654  510582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:23:31.609018  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.623256  510582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.636414  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
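Taken together, the sed edits above leave the CRI-O drop-in with a pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A minimal sketch of how one might verify that on the node (same /etc/crio/crio.conf.d/02-crio.conf path as in the commands above; expected values are inferred from those commands, not re-read from the node):

    # check the values written by the sed edits above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected matches, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",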
	I1129 10:23:31.649004  510582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:23:31.663145  510582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:23:31.676354  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:31.912061  510582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:23:32.189319  510582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:23:32.189401  510582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:23:32.193961  510582 start.go:564] Will wait 60s for crictl version
	I1129 10:23:32.194026  510582 ssh_runner.go:195] Run: which crictl
	I1129 10:23:32.197754  510582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:23:32.272299  510582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:23:32.272396  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.325801  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.377241  510582 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:23:32.380119  510582 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:23:32.402383  510582 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:23:32.406539  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:23:32.416708  510582 kubeadm.go:884] updating cluster {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:23:32.416836  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:32.416887  510582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:23:32.453372  510582 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:23:32.453393  510582 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:23:32.453401  510582 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:23:32.453494  510582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-949993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
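The drop-in above is what systemd ends up using for the kubelet on this node. A quick, hypothetical way to confirm the flags took effect (unit name and flag taken from the ExecStart line above):

    # print the kubelet unit plus drop-ins and look for the override flag
    sudo systemctl cat kubelet | grep -e '--hostname-override'
    # expected: the ExecStart line containing --hostname-override=no-preload-949993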
	I1129 10:23:32.453574  510582 ssh_runner.go:195] Run: crio config
	I1129 10:23:32.518176  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:32.518198  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:32.518217  510582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:23:32.518241  510582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-949993 NodeName:no-preload-949993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:23:32.518373  510582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-949993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:23:32.518452  510582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:23:32.527111  510582 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:23:32.527191  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:23:32.535301  510582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:23:32.549602  510582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:23:32.568956  510582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
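The rendered kubeadm config shown above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. As a hedged sketch, one way to sanity-check such a file before it is applied; this assumes a kubeadm binary sits alongside the kubelet/kubectl binaries found in /var/lib/minikube/binaries/v1.34.1 and that this kubeadm version ships the `config validate` subcommand:

    # validate the generated config against this kubeadm's API schema (assumption: subcommand available)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new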
	I1129 10:23:32.585385  510582 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:23:32.589390  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:23:32.599435  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:32.799936  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:32.835409  510582 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993 for IP: 192.168.76.2
	I1129 10:23:32.835433  510582 certs.go:195] generating shared ca certs ...
	I1129 10:23:32.835450  510582 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:32.835590  510582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:23:32.835643  510582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:23:32.835655  510582 certs.go:257] generating profile certs ...
	I1129 10:23:32.835750  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key
	I1129 10:23:32.835832  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f
	I1129 10:23:32.835877  510582 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key
	I1129 10:23:32.835996  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:23:32.836031  510582 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:23:32.836047  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:23:32.836081  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:23:32.836111  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:23:32.836139  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:23:32.836186  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:32.843733  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:23:32.895544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:23:32.935981  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:23:33.000104  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:23:33.047922  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:23:33.103716  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:23:33.136749  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:23:33.187422  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:23:33.240791  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:23:33.272544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:23:33.307178  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:23:33.345546  510582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:23:33.368081  510582 ssh_runner.go:195] Run: openssl version
	I1129 10:23:33.379046  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:23:33.394404  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401376  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401447  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.458958  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:23:33.468397  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:23:33.485859  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490215  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490285  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.545511  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:23:33.556597  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:23:33.573103  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577547  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577617  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.619056  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
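The openssl/ln pairs above implement the standard OpenSSL hashed-symlink layout for the system trust store: the certificate's subject hash becomes the symlink name under /etc/ssl/certs. A minimal sketch of the same step for the minikubeCA certificate (paths taken from the log; the hash value is whatever openssl reports):

    # derive the subject hash and create the hash-named symlink, as the commands above do
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"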
	I1129 10:23:33.627933  510582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:23:33.632229  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:23:33.682188  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:23:33.768325  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:23:33.867107  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:23:33.990455  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:23:34.173647  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 10:23:34.314431  510582 kubeadm.go:401] StartCluster: {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:34.314531  510582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:23:34.314601  510582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:23:34.429882  510582 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:23:34.429906  510582 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:23:34.429918  510582 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:23:34.429922  510582 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:23:34.429925  510582 cri.go:89] found id: ""
	I1129 10:23:34.429976  510582 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:23:34.462602  510582 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:23:34Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:23:34.462694  510582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:23:34.493028  510582 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:23:34.493050  510582 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:23:34.493109  510582 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:23:34.514916  510582 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:23:34.515323  510582 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-949993" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.515444  510582 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-949993" cluster setting kubeconfig missing "no-preload-949993" context setting]
	I1129 10:23:34.515707  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.517173  510582 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:23:34.538453  510582 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:23:34.538487  510582 kubeadm.go:602] duration metric: took 45.43138ms to restartPrimaryControlPlane
	I1129 10:23:34.538499  510582 kubeadm.go:403] duration metric: took 224.080463ms to StartCluster
	I1129 10:23:34.538514  510582 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.538584  510582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.539276  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.539485  510582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:34.539835  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:34.539895  510582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:34.539971  510582 addons.go:70] Setting storage-provisioner=true in profile "no-preload-949993"
	I1129 10:23:34.539990  510582 addons.go:239] Setting addon storage-provisioner=true in "no-preload-949993"
	W1129 10:23:34.539996  510582 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:23:34.540021  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.540557  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.540820  510582 addons.go:70] Setting dashboard=true in profile "no-preload-949993"
	I1129 10:23:34.540842  510582 addons.go:239] Setting addon dashboard=true in "no-preload-949993"
	W1129 10:23:34.540851  510582 addons.go:248] addon dashboard should already be in state true
	I1129 10:23:34.540875  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.541271  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.544485  510582 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:34.545062  510582 addons.go:70] Setting default-storageclass=true in profile "no-preload-949993"
	I1129 10:23:34.545264  510582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-949993"
	I1129 10:23:34.545663  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.554267  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:34.602008  510582 addons.go:239] Setting addon default-storageclass=true in "no-preload-949993"
	W1129 10:23:34.602031  510582 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:23:34.602055  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.603326  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.615053  510582 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:23:34.616089  510582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:34.629458  510582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:34.629489  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:34.629503  510582 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:23:35.259018  507966 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.398524551s
	I1129 10:23:36.911024  507966 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.051014652s
	I1129 10:23:37.862258  507966 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002004509s
	I1129 10:23:37.890551  507966 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:23:37.906864  507966 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:23:37.927922  507966 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:23:37.928146  507966 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-194354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:23:37.947269  507966 kubeadm.go:319] [bootstrap-token] Using token: da5774.b3xvqvayofuxejdl
	I1129 10:23:37.950242  507966 out.go:252]   - Configuring RBAC rules ...
	I1129 10:23:37.950371  507966 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:23:37.956917  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:23:37.973044  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:23:37.978328  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:23:37.985481  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:23:37.993861  507966 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:23:38.271786  507966 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:23:38.834511  507966 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:23:39.269570  507966 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:23:39.270926  507966 kubeadm.go:319] 
	I1129 10:23:39.271005  507966 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:23:39.271011  507966 kubeadm.go:319] 
	I1129 10:23:39.271084  507966 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:23:39.271088  507966 kubeadm.go:319] 
	I1129 10:23:39.271112  507966 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:23:39.271167  507966 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:23:39.271215  507966 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:23:39.271219  507966 kubeadm.go:319] 
	I1129 10:23:39.271270  507966 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:23:39.271274  507966 kubeadm.go:319] 
	I1129 10:23:39.271324  507966 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:23:39.271328  507966 kubeadm.go:319] 
	I1129 10:23:39.271377  507966 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:23:39.271447  507966 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:23:39.271511  507966 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:23:39.271515  507966 kubeadm.go:319] 
	I1129 10:23:39.271595  507966 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:23:39.271667  507966 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:23:39.271671  507966 kubeadm.go:319] 
	I1129 10:23:39.274443  507966 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.274618  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:23:39.274665  507966 kubeadm.go:319] 	--control-plane 
	I1129 10:23:39.274685  507966 kubeadm.go:319] 
	I1129 10:23:39.274790  507966 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:23:39.274824  507966 kubeadm.go:319] 
	I1129 10:23:39.274937  507966 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.275069  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:23:39.277245  507966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:23:39.277463  507966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:23:39.277563  507966 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:23:39.277578  507966 cni.go:84] Creating CNI manager for ""
	I1129 10:23:39.277584  507966 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:39.280958  507966 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 10:23:34.629562  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.632538  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:23:34.632566  510582 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:23:34.632643  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.659316  510582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:34.659338  510582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:34.659401  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.675695  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.700605  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.711421  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:35.019252  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:35.053107  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:35.085381  510582 node_ready.go:35] waiting up to 6m0s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:35.107451  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:35.115826  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:23:35.115909  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:23:35.181608  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:23:35.181686  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:23:35.362957  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:23:35.363035  510582 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:23:35.522529  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:23:35.522606  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:23:35.587227  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:23:35.587306  510582 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:23:35.636645  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:23:35.636719  510582 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:23:35.670183  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:23:35.670257  510582 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:23:35.711342  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:23:35.711415  510582 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:23:35.762502  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:35.762579  510582 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:23:35.803385  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:39.283847  507966 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:23:39.288953  507966 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:23:39.289014  507966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:23:39.312282  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:23:39.959493  507966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:23:39.959625  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:39.959682  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-194354 minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=default-k8s-diff-port-194354 minikube.k8s.io/primary=true
	I1129 10:23:40.361401  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:40.361495  507966 ops.go:34] apiserver oom_adj: -16
	I1129 10:23:40.861451  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.361873  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.862097  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.362096  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.861458  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.384883  510582 node_ready.go:49] node "no-preload-949993" is "Ready"
	I1129 10:23:41.384912  510582 node_ready.go:38] duration metric: took 6.299436812s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:41.384926  510582 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:23:41.384987  510582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:23:41.624086  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.570863747s)
	I1129 10:23:43.322773  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.215226839s)
	I1129 10:23:43.322905  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.519441572s)
	I1129 10:23:43.323071  510582 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.938072916s)
	I1129 10:23:43.323086  510582 api_server.go:72] duration metric: took 8.783570641s to wait for apiserver process to appear ...
	I1129 10:23:43.323092  510582 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:23:43.323109  510582 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:23:43.326153  510582 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-949993 addons enable metrics-server
	
	I1129 10:23:43.329073  510582 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:23:43.362405  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:43.861894  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:44.020361  507966 kubeadm.go:1114] duration metric: took 4.060786248s to wait for elevateKubeSystemPrivileges
	I1129 10:23:44.020470  507966 kubeadm.go:403] duration metric: took 25.977646325s to StartCluster
	I1129 10:23:44.020504  507966 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.020633  507966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:44.021727  507966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.022131  507966 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:44.022387  507966 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:44.022439  507966 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:44.022503  507966 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.022516  507966 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.022540  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.023006  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.022156  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:23:44.023473  507966 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.023496  507966 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-194354"
	I1129 10:23:44.023827  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.026776  507966 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:44.034308  507966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:44.060803  507966 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.060841  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.061279  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.086997  507966 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:43.332070  510582 addons.go:530] duration metric: took 8.792168658s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:23:43.332598  510582 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:23:43.334000  510582 api_server.go:141] control plane version: v1.34.1
	I1129 10:23:43.334019  510582 api_server.go:131] duration metric: took 10.921442ms to wait for apiserver health ...
	I1129 10:23:43.334028  510582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:23:43.338676  510582 system_pods.go:59] 8 kube-system pods found
	I1129 10:23:43.338760  510582 system_pods.go:61] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.338788  510582 system_pods.go:61] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.338829  510582 system_pods.go:61] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.338864  510582 system_pods.go:61] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.338896  510582 system_pods.go:61] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.338920  510582 system_pods.go:61] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.338953  510582 system_pods.go:61] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.338982  510582 system_pods.go:61] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.339005  510582 system_pods.go:74] duration metric: took 4.970521ms to wait for pod list to return data ...
	I1129 10:23:43.339026  510582 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:23:43.346685  510582 default_sa.go:45] found service account: "default"
	I1129 10:23:43.346710  510582 default_sa.go:55] duration metric: took 7.663155ms for default service account to be created ...
	I1129 10:23:43.346720  510582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:23:43.352409  510582 system_pods.go:86] 8 kube-system pods found
	I1129 10:23:43.352440  510582 system_pods.go:89] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.352450  510582 system_pods.go:89] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.352458  510582 system_pods.go:89] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.352465  510582 system_pods.go:89] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.352471  510582 system_pods.go:89] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.352477  510582 system_pods.go:89] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.352483  510582 system_pods.go:89] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.352496  510582 system_pods.go:89] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.352503  510582 system_pods.go:126] duration metric: took 5.777307ms to wait for k8s-apps to be running ...
	I1129 10:23:43.352513  510582 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:23:43.352571  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:23:43.446329  510582 system_svc.go:56] duration metric: took 93.806221ms WaitForService to wait for kubelet
	I1129 10:23:43.446362  510582 kubeadm.go:587] duration metric: took 8.906844286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:43.446385  510582 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:23:43.468875  510582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:23:43.468912  510582 node_conditions.go:123] node cpu capacity is 2
	I1129 10:23:43.468925  510582 node_conditions.go:105] duration metric: took 22.533969ms to run NodePressure ...
	I1129 10:23:43.468937  510582 start.go:242] waiting for startup goroutines ...
	I1129 10:23:43.468956  510582 start.go:247] waiting for cluster config update ...
	I1129 10:23:43.468968  510582 start.go:256] writing updated cluster config ...
	I1129 10:23:43.469217  510582 ssh_runner.go:195] Run: rm -f paused
	I1129 10:23:43.474670  510582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:23:43.483215  510582 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:23:44.090345  507966 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.090370  507966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:44.090450  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.100874  507966 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:44.100899  507966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:44.100968  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.132523  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.143267  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.401252  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:23:44.431702  507966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:44.535956  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.591821  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:45.161227  507966 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1129 10:23:45.162334  507966 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:23:45.690965  507966 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-194354" context rescaled to 1 replicas
	I1129 10:23:45.753803  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217763289s)
	I1129 10:23:45.753872  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161979524s)
	I1129 10:23:45.770491  507966 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:23:45.773465  507966 addons.go:530] duration metric: took 1.751018181s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1129 10:23:47.165805  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:45.542622  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:47.989562  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:49.666635  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:52.166177  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:49.990541  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:51.990931  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:54.666022  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:57.166497  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:54.489879  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:56.497926  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:58.989190  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:59.665270  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:02.165677  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:00.990865  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:03.488362  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:04.166335  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:06.665750  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:05.988397  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:07.989472  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:08.666205  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:11.166292  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:10.488633  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:12.489841  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:13.665315  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:15.665945  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:18.165215  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:14.989495  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:17.494318  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	I1129 10:24:18.988496  510582 pod_ready.go:94] pod "coredns-66bc5c9577-vcgbt" is "Ready"
	I1129 10:24:18.988523  510582 pod_ready.go:86] duration metric: took 35.505241752s for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.991259  510582 pod_ready.go:83] waiting for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.996288  510582 pod_ready.go:94] pod "etcd-no-preload-949993" is "Ready"
	I1129 10:24:18.996320  510582 pod_ready.go:86] duration metric: took 5.032577ms for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.998454  510582 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.003569  510582 pod_ready.go:94] pod "kube-apiserver-no-preload-949993" is "Ready"
	I1129 10:24:19.003603  510582 pod_ready.go:86] duration metric: took 5.120225ms for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.006932  510582 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.186848  510582 pod_ready.go:94] pod "kube-controller-manager-no-preload-949993" is "Ready"
	I1129 10:24:19.186884  510582 pod_ready.go:86] duration metric: took 179.913065ms for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.386987  510582 pod_ready.go:83] waiting for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.786916  510582 pod_ready.go:94] pod "kube-proxy-ffl4g" is "Ready"
	I1129 10:24:19.786948  510582 pod_ready.go:86] duration metric: took 399.93109ms for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.987148  510582 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386279  510582 pod_ready.go:94] pod "kube-scheduler-no-preload-949993" is "Ready"
	I1129 10:24:20.386307  510582 pod_ready.go:86] duration metric: took 399.132959ms for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386321  510582 pod_ready.go:40] duration metric: took 36.911571212s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:20.443695  510582 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:20.446728  510582 out.go:179] * Done! kubectl is now configured to use "no-preload-949993" cluster and "default" namespace by default
	W1129 10:24:20.165619  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:22.665117  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:24.665872  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	I1129 10:24:25.667952  507966 node_ready.go:49] node "default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:25.667980  507966 node_ready.go:38] duration metric: took 40.505616922s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:24:25.667993  507966 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:24:25.668053  507966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:24:25.681244  507966 api_server.go:72] duration metric: took 41.65902631s to wait for apiserver process to appear ...
	I1129 10:24:25.681270  507966 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:24:25.681290  507966 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 10:24:25.691496  507966 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 10:24:25.692557  507966 api_server.go:141] control plane version: v1.34.1
	I1129 10:24:25.692579  507966 api_server.go:131] duration metric: took 11.302624ms to wait for apiserver health ...
	I1129 10:24:25.692588  507966 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:24:25.696448  507966 system_pods.go:59] 8 kube-system pods found
	I1129 10:24:25.696487  507966 system_pods.go:61] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.696494  507966 system_pods.go:61] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.696501  507966 system_pods.go:61] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.696506  507966 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.696510  507966 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.696516  507966 system_pods.go:61] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.696519  507966 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.696526  507966 system_pods.go:61] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.696532  507966 system_pods.go:74] duration metric: took 3.939058ms to wait for pod list to return data ...
	I1129 10:24:25.696552  507966 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:24:25.699312  507966 default_sa.go:45] found service account: "default"
	I1129 10:24:25.699338  507966 default_sa.go:55] duration metric: took 2.780062ms for default service account to be created ...
	I1129 10:24:25.699349  507966 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:24:25.702253  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:25.702291  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.702298  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.702305  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.702310  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.702317  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.702322  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.702330  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.702336  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.702357  507966 retry.go:31] will retry after 310.669836ms: missing components: kube-dns
	I1129 10:24:26.019886  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.019926  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.019934  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.019940  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.019944  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.019949  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.019954  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.019959  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.019965  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.019986  507966 retry.go:31] will retry after 286.170038ms: missing components: kube-dns
	I1129 10:24:26.310844  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.310881  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.310888  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.310896  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.310902  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.310907  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.310911  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.310917  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.310927  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.310957  507966 retry.go:31] will retry after 343.061865ms: missing components: kube-dns
	I1129 10:24:26.658151  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.658188  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Running
	I1129 10:24:26.658196  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.658201  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.658206  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.658210  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.658214  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.658218  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.658222  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Running
	I1129 10:24:26.658229  507966 system_pods.go:126] duration metric: took 958.87435ms to wait for k8s-apps to be running ...
	I1129 10:24:26.658241  507966 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:24:26.658299  507966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:26.671648  507966 system_svc.go:56] duration metric: took 13.39846ms WaitForService to wait for kubelet
	I1129 10:24:26.671679  507966 kubeadm.go:587] duration metric: took 42.649466351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:24:26.671698  507966 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:24:26.674832  507966 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:24:26.674866  507966 node_conditions.go:123] node cpu capacity is 2
	I1129 10:24:26.674881  507966 node_conditions.go:105] duration metric: took 3.178597ms to run NodePressure ...
	I1129 10:24:26.674895  507966 start.go:242] waiting for startup goroutines ...
	I1129 10:24:26.674903  507966 start.go:247] waiting for cluster config update ...
	I1129 10:24:26.674915  507966 start.go:256] writing updated cluster config ...
	I1129 10:24:26.675229  507966 ssh_runner.go:195] Run: rm -f paused
	I1129 10:24:26.679081  507966 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:26.683003  507966 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.688164  507966 pod_ready.go:94] pod "coredns-66bc5c9577-8rvzs" is "Ready"
	I1129 10:24:26.688202  507966 pod_ready.go:86] duration metric: took 5.168069ms for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.690538  507966 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.696711  507966 pod_ready.go:94] pod "etcd-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.696743  507966 pod_ready.go:86] duration metric: took 6.17568ms for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.699232  507966 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.704231  507966 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.704300  507966 pod_ready.go:86] duration metric: took 5.037541ms for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.706968  507966 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.083799  507966 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:27.083828  507966 pod_ready.go:86] duration metric: took 376.798157ms for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.283531  507966 pod_ready.go:83] waiting for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.683679  507966 pod_ready.go:94] pod "kube-proxy-68szw" is "Ready"
	I1129 10:24:27.683710  507966 pod_ready.go:86] duration metric: took 400.149827ms for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.884031  507966 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283309  507966 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:28.283387  507966 pod_ready.go:86] duration metric: took 399.326474ms for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283410  507966 pod_ready.go:40] duration metric: took 1.60429409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:28.339155  507966 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:28.342238  507966 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-194354" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.233612046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.240618463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.241612239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.257713878Z" level=info msg="Created container 824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper" id=c6fbbb98-f521-4ba9-9778-0c4a5787a7dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.259871893Z" level=info msg="Starting container: 824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49" id=5034fef7-d071-4bc6-bb7a-a3b84969637e name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.265361074Z" level=info msg="Started container" PID=1635 containerID=824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper id=5034fef7-d071-4bc6-bb7a-a3b84969637e name=/runtime.v1.RuntimeService/StartContainer sandboxID=01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9
	Nov 29 10:24:17 no-preload-949993 conmon[1633]: conmon 824f7e02c2aee5b2cab4 <ninfo>: container 1635 exited with status 1
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.548248438Z" level=info msg="Removing container: 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.560648962Z" level=info msg="Error loading conmon cgroup of container 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7: cgroup deleted" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.56448502Z" level=info msg="Removed container 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.123136611Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.130771623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.130947436Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.131022013Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.134627578Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.13478964Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.134862765Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138537852Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138720509Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138797416Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144552635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144589682Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144613674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.152954827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.152986983Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	824f7e02c2aee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   01e053c534f28       dashboard-metrics-scraper-6ffb444bf9-4k579   kubernetes-dashboard
	0d6f7b88b3afc       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago       Running             storage-provisioner         2                   a75afc25ea521       storage-provisioner                          kube-system
	8073d01e01f25       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   adcc00be7d095       kubernetes-dashboard-855c9754f9-gzbxs        kubernetes-dashboard
	7ebed6c6ed7cd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   55418714bfd8c       coredns-66bc5c9577-vcgbt                     kube-system
	1e1231b95014a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   fd6ffb11b0b92       busybox                                      default
	edde7d522ee76       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   85f00a94aa65c       kube-proxy-ffl4g                             kube-system
	0bd947bb8314f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   6ba28c254fc8f       kindnet-jxmnq                                kube-system
	c37befce33bd3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago       Exited              storage-provisioner         1                   a75afc25ea521       storage-provisioner                          kube-system
	0b5388cb21027       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a924abc6762b9       kube-controller-manager-no-preload-949993    kube-system
	3bd5c31fef611       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   03cc87cd2cbc2       kube-apiserver-no-preload-949993             kube-system
	e89ae8a5d77cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9a1c3be6916ec       etcd-no-preload-949993                       kube-system
	81b1c14d84e48       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   3744a305eb487       kube-scheduler-no-preload-949993             kube-system
	
	
	==> coredns [7ebed6c6ed7cd2d1b7be212ae286908fb6ab40e4e3423dc80536de93d275207c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50898 - 64250 "HINFO IN 6240591624351836929.8030457551214750142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010494409s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-949993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-949993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-949993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_22_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-949993
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:24:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-949993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                5439880d-b2ce-4fc8-b8d7-05ac5d12654c
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-vcgbt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-949993                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-jxmnq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-949993              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-949993     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-ffl4g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-949993              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4k579    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gzbxs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 111s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 118s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           114s                 node-controller  Node no-preload-949993 event: Registered Node no-preload-949993 in Controller
	  Normal   NodeReady                99s                  kubelet          Node no-preload-949993 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-949993 event: Registered Node no-preload-949993 in Controller
	
	
	==> dmesg <==
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4] <==
	{"level":"warn","ts":"2025-11-29T10:23:38.477834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.590428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.622720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.648786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.692722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.748121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.824900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.869733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.915782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.969779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.014197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.045153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.099305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.146411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.204632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.235319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.253819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.281238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.311782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.370302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.421367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.469690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.517779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.542637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.722791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:24:35 up  3:07,  0 user,  load average: 4.02, 3.73, 2.83
	Linux no-preload-949993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bd947bb8314f6126db11c7ce0f7f06d2894741d282df841341a8467fadae7c6] <==
	I1129 10:23:42.949471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:23:42.956094       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:23:42.956301       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:23:42.956343       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:23:42.956382       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:23:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:23:43.120578       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:23:43.120647       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:23:43.120680       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:23:43.121477       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:24:13.120924       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:24:13.122157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:24:13.122180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:24:13.122269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1129 10:24:14.721295       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:24:14.721327       1 metrics.go:72] Registering metrics
	I1129 10:24:14.721381       1 controller.go:711] "Syncing nftables rules"
	I1129 10:24:23.122169       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:24:23.122206       1 main.go:301] handling current node
	I1129 10:24:33.125138       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:24:33.125170       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7] <==
	I1129 10:23:41.416817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:23:41.461899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:23:41.461999       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:23:41.462407       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:23:41.466003       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:23:41.466346       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:23:41.484369       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:23:41.484690       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:23:41.487065       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:23:41.487079       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:23:41.516842       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:23:41.526483       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:23:41.526645       1 cache.go:39] Caches are synced for autoregister controller
	E1129 10:23:41.533802       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:23:41.843883       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:23:42.147970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:23:42.541765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:23:42.701735       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:23:42.814581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:23:42.861367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:23:43.080885       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.157.180"}
	I1129 10:23:43.131207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.191.48"}
	I1129 10:23:45.683152       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:23:45.784506       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:23:45.832149       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d] <==
	I1129 10:23:45.454227       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 10:23:45.460973       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:23:45.475062       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:23:45.475305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:23:45.475358       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:23:45.475390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:23:45.475744       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:23:45.475808       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:23:45.492826       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:23:45.493612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:23:45.494403       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:23:45.499401       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:23:45.500258       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:23:45.522302       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:23:45.500345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:23:45.510195       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:23:45.527897       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-949993"
	I1129 10:23:45.527975       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:23:45.528052       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:23:45.529114       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:23:45.519796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:23:45.519833       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:23:45.555741       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:23:45.555967       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:23:45.562247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [edde7d522ee76ca987e27608ab5a2d4ac968957b65986bd758dc0841ffba33e2] <==
	I1129 10:23:43.159189       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:23:43.265967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:23:43.378347       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:23:43.378387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:23:43.378491       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:23:43.513410       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:23:43.513533       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:23:43.522355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:23:43.522712       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:23:43.522972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:43.524267       1 config.go:200] "Starting service config controller"
	I1129 10:23:43.524552       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:23:43.524624       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:23:43.524661       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:23:43.524697       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:23:43.524724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:23:43.525356       1 config.go:309] "Starting node config controller"
	I1129 10:23:43.528682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:23:43.528816       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:23:43.624825       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:23:43.624825       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:23:43.624852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a] <==
	I1129 10:23:36.558053       1 serving.go:386] Generated self-signed cert in-memory
	W1129 10:23:41.270960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:23:41.270999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:23:41.271009       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:23:41.271017       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:23:41.537860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:23:41.537889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:41.547716       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:23:41.547846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:41.547865       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:41.547881       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:23:41.657042       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092123     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/46e9c8ab-d9a3-40ac-8bd8-9451c168f859-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4k579\" (UID: \"46e9c8ab-d9a3-40ac-8bd8-9451c168f859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092183     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7l88\" (UniqueName: \"kubernetes.io/projected/46e9c8ab-d9a3-40ac-8bd8-9451c168f859-kube-api-access-j7l88\") pod \"dashboard-metrics-scraper-6ffb444bf9-4k579\" (UID: \"46e9c8ab-d9a3-40ac-8bd8-9451c168f859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092208     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kn9s\" (UniqueName: \"kubernetes.io/projected/e0aa7948-1813-4a7a-aee7-d516085b2f2a-kube-api-access-2kn9s\") pod \"kubernetes-dashboard-855c9754f9-gzbxs\" (UID: \"e0aa7948-1813-4a7a-aee7-d516085b2f2a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092234     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0aa7948-1813-4a7a-aee7-d516085b2f2a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gzbxs\" (UID: \"e0aa7948-1813-4a7a-aee7-d516085b2f2a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: W1129 10:23:46.328090     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9 WatchSource:0}: Error finding container 01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9: Status 404 returned error can't find the container with id 01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9
	Nov 29 10:23:48 no-preload-949993 kubelet[771]: I1129 10:23:48.801690     771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 10:23:56 no-preload-949993 kubelet[771]: I1129 10:23:56.487190     771 scope.go:117] "RemoveContainer" containerID="88b93c1c4a1396db911e85839960d9d8f77ac7acea4328b161fc675d9ffa44d3"
	Nov 29 10:23:56 no-preload-949993 kubelet[771]: I1129 10:23:56.526107     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs" podStartSLOduration=6.728299828 podStartE2EDuration="11.526057553s" podCreationTimestamp="2025-11-29 10:23:45 +0000 UTC" firstStartedPulling="2025-11-29 10:23:46.309243564 +0000 UTC m=+13.485472396" lastFinishedPulling="2025-11-29 10:23:51.107001289 +0000 UTC m=+18.283230121" observedRunningTime="2025-11-29 10:23:51.511191562 +0000 UTC m=+18.687420418" watchObservedRunningTime="2025-11-29 10:23:56.526057553 +0000 UTC m=+23.702286385"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: I1129 10:23:57.491953     771 scope.go:117] "RemoveContainer" containerID="88b93c1c4a1396db911e85839960d9d8f77ac7acea4328b161fc675d9ffa44d3"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: I1129 10:23:57.492949     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: E1129 10:23:57.493278     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:23:58 no-preload-949993 kubelet[771]: I1129 10:23:58.496459     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:23:58 no-preload-949993 kubelet[771]: E1129 10:23:58.497333     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:06 no-preload-949993 kubelet[771]: I1129 10:24:06.254575     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:06 no-preload-949993 kubelet[771]: E1129 10:24:06.254781     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:13 no-preload-949993 kubelet[771]: I1129 10:24:13.531972     771 scope.go:117] "RemoveContainer" containerID="c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.229823     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.546382     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.546739     771 scope.go:117] "RemoveContainer" containerID="824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: E1129 10:24:17.546989     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:26 no-preload-949993 kubelet[771]: I1129 10:24:26.254315     771 scope.go:117] "RemoveContainer" containerID="824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	Nov 29 10:24:26 no-preload-949993 kubelet[771]: E1129 10:24:26.254978     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:32 no-preload-949993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:24:32 no-preload-949993 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:24:32 no-preload-949993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8073d01e01f256d04be1a4778a38b492288c905eb68f7f59e9e88869f602b4c9] <==
	2025/11/29 10:23:51 Using namespace: kubernetes-dashboard
	2025/11/29 10:23:51 Using in-cluster config to connect to apiserver
	2025/11/29 10:23:51 Using secret token for csrf signing
	2025/11/29 10:23:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:23:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:23:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:23:51 Generating JWE encryption key
	2025/11/29 10:23:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:23:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:23:52 Initializing JWE encryption key from synchronized object
	2025/11/29 10:23:52 Creating in-cluster Sidecar client
	2025/11/29 10:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:23:52 Serving insecurely on HTTP port: 9090
	2025/11/29 10:24:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:23:51 Starting overwatch
	
	
	==> storage-provisioner [0d6f7b88b3afc61aa717c1348d07d0eac84c75a53fcc5fe33c5fde61127d07c6] <==
	I1129 10:24:13.585866       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:24:13.604428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:24:13.604722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:24:13.607805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:17.062440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:21.323157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:24.921666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:27.975143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:30.997654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:31.003221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:31.003486       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:24:31.003680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d!
	I1129 10:24:31.004805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad137e71-6950-4dad-a697-38d979710672", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d became leader
	W1129 10:24:31.011431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:31.024604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:31.104661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d!
	W1129 10:24:33.027945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:33.032528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:35.036469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:35.046380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e] <==
	I1129 10:23:42.692259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:24:12.693694       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949993 -n no-preload-949993: exit status 2 (400.480263ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-949993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-949993
helpers_test.go:243: (dbg) docker inspect no-preload-949993:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	        "Created": "2025-11-29T10:21:50.556040223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:23:24.739863264Z",
	            "FinishedAt": "2025-11-29T10:23:23.658337472Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3-json.log",
	        "Name": "/no-preload-949993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-949993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-949993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3",
	                "LowerDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/446e0603c231c5cf0677b38fb3eb616d8f839a66ec0f4930439f2e8c8a8c9c9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-949993",
	                "Source": "/var/lib/docker/volumes/no-preload-949993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-949993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-949993",
	                "name.minikube.sigs.k8s.io": "no-preload-949993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0184fde988acfcb947a9f2cad32aad001c6f35990995251ab8db8a05779b7731",
	            "SandboxKey": "/var/run/docker/netns/0184fde988ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-949993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:7a:21:e4:92:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62b6fd5a8cb74510b8e0db3c4b4e346db103446743514dcfc437d8e74be8a4c3",
	                    "EndpointID": "6abf395f43fd246877843c5c1540a9d4538e21f557623ca3d5f4c397a9140a94",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-949993",
	                        "01cb8829dafd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993: exit status 2 (419.824164ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-949993 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-949993 logs -n 25: (1.651092936s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                          │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                             │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                          │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                              │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                             │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                    │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                    │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                              │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                               │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                              │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:23:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:23:24.356728  510582 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:23:24.356918  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.356945  510582 out.go:374] Setting ErrFile to fd 2...
	I1129 10:23:24.356964  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.357268  510582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:23:24.357953  510582 out.go:368] Setting JSON to false
	I1129 10:23:24.358973  510582 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11154,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:23:24.359078  510582 start.go:143] virtualization:  
	I1129 10:23:24.362143  510582 out.go:179] * [no-preload-949993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:23:24.366229  510582 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:23:24.366312  510582 notify.go:221] Checking for updates...
	I1129 10:23:24.370325  510582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:23:24.373233  510582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:24.376196  510582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:23:24.379186  510582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:23:24.382194  510582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:23:24.385561  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:24.386290  510582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:23:24.432374  510582 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:23:24.432489  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.519043  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.509678937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.519145  510582 docker.go:319] overlay module found
	I1129 10:23:24.522278  510582 out.go:179] * Using the docker driver based on existing profile
	I1129 10:23:24.525120  510582 start.go:309] selected driver: docker
	I1129 10:23:24.525138  510582 start.go:927] validating driver "docker" against &{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.525239  510582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:23:24.525907  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.635064  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.624852127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.635418  510582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:24.635452  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:24.635516  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:24.635567  510582 start.go:353] cluster config:
	{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.640561  510582 out.go:179] * Starting "no-preload-949993" primary control-plane node in "no-preload-949993" cluster
	I1129 10:23:24.643341  510582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:23:24.646345  510582 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:23:24.649223  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:24.649366  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:24.649692  510582 cache.go:107] acquiring lock: {Name:mk7e036f21c3fa53998769ec8ca8e9d0cc43797a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.649767  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 10:23:24.649776  510582 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.071µs
	I1129 10:23:24.649788  510582 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 10:23:24.649800  510582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:23:24.650013  510582 cache.go:107] acquiring lock: {Name:mkec0dc08372453f12658d7249505bdb38e0468a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650140  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 10:23:24.650153  510582 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 145.208µs
	I1129 10:23:24.650160  510582 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 10:23:24.650183  510582 cache.go:107] acquiring lock: {Name:mk55e5c5c1d216b13668659dfb1a1298483fe357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650228  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 10:23:24.650234  510582 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.531µs
	I1129 10:23:24.650240  510582 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 10:23:24.650250  510582 cache.go:107] acquiring lock: {Name:mk79de74aa677651359631e14e64f02dbae72c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650278  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 10:23:24.650283  510582 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 34.487µs
	I1129 10:23:24.650289  510582 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 10:23:24.650298  510582 cache.go:107] acquiring lock: {Name:mk3420fbe5609e73633731fff1b3352eed3a8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650322  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 10:23:24.650327  510582 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.573µs
	I1129 10:23:24.650333  510582 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 10:23:24.650348  510582 cache.go:107] acquiring lock: {Name:mkc2341e09a949f9273b1d33b0a3b4021526fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650378  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 10:23:24.650383  510582 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.472µs
	I1129 10:23:24.650388  510582 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 10:23:24.650397  510582 cache.go:107] acquiring lock: {Name:mkb12ce0a127601415f42976e337ea76e82915af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650520  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 10:23:24.650532  510582 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 134.861µs
	I1129 10:23:24.650539  510582 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 10:23:24.650574  510582 cache.go:107] acquiring lock: {Name:mk0167a0bfcd689b945be8d473d2efef87ce9fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650609  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 10:23:24.650614  510582 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 43.085µs
	I1129 10:23:24.650627  510582 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 10:23:24.650634  510582 cache.go:87] Successfully saved all images to host disk.
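All of the tarballs above are written under the profile's image cache; a quick way to confirm they exist on the Jenkins host (a sketch only, using the exact paths from the log) is:

    # list the arm64 image tarballs minikube reported as cached
    ls -lh /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/
    ls -lh /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/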
	I1129 10:23:24.685204  510582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:23:24.685223  510582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:23:24.685237  510582 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:23:24.685268  510582 start.go:360] acquireMachinesLock for no-preload-949993: {Name:mk6ff94a11813e006c209466e9cbb5aadf7ae1bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.685314  510582 start.go:364] duration metric: took 32.796µs to acquireMachinesLock for "no-preload-949993"
	I1129 10:23:24.685333  510582 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:23:24.685338  510582 fix.go:54] fixHost starting: 
	I1129 10:23:24.685582  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:24.702230  510582 fix.go:112] recreateIfNeeded on no-preload-949993: state=Stopped err=<nil>
	W1129 10:23:24.702266  510582 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:23:23.758317  507966 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:23:23.758401  507966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:23:24.783441  507966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:23:25.344447  507966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:23:25.588671  507966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:23:26.241886  507966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:23:26.665337  507966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:23:26.666189  507966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:23:26.669919  507966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:23:26.672450  507966 out.go:252]   - Booting up control plane ...
	I1129 10:23:26.672562  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:23:26.672660  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:23:26.673289  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:23:26.693850  507966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:23:26.693960  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:23:26.703944  507966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:23:26.704045  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:23:26.704084  507966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:23:26.854967  507966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:23:26.855128  507966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 10:23:27.856349  507966 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00179272s
	I1129 10:23:27.859935  507966 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:23:27.860265  507966 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1129 10:23:27.860364  507966 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:23:27.860445  507966 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:23:24.705502  510582 out.go:252] * Restarting existing docker container for "no-preload-949993" ...
	I1129 10:23:24.705587  510582 cli_runner.go:164] Run: docker start no-preload-949993
	I1129 10:23:25.023675  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:25.045186  510582 kic.go:430] container "no-preload-949993" state is running.
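Because the machine already exists, fixHost only restarts the stopped kic container and re-checks its state; the two docker commands it shells out to are shown above and can be replayed by hand (container name taken from this run):

    docker start no-preload-949993
    docker container inspect no-preload-949993 --format={{.State.Status}}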
	I1129 10:23:25.045565  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:25.083414  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:25.083658  510582 machine.go:94] provisionDockerMachine start ...
	I1129 10:23:25.083731  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:25.111287  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:25.111617  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:25.111633  510582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:23:25.114058  510582 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:23:28.290462  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.290551  510582 ubuntu.go:182] provisioning hostname "no-preload-949993"
	I1129 10:23:28.290661  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.319815  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.320120  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.320131  510582 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-949993 && echo "no-preload-949993" | sudo tee /etc/hostname
	I1129 10:23:28.514673  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.514831  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.546204  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.546531  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.546547  510582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-949993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-949993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-949993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:23:28.722758  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:23:28.722825  510582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:23:28.722873  510582 ubuntu.go:190] setting up certificates
	I1129 10:23:28.722920  510582 provision.go:84] configureAuth start
	I1129 10:23:28.723001  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:28.747313  510582 provision.go:143] copyHostCerts
	I1129 10:23:28.747381  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:23:28.747394  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:23:28.747471  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:23:28.747565  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:23:28.747576  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:23:28.747601  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:23:28.747692  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:23:28.747697  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:23:28.747720  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:23:28.747771  510582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.no-preload-949993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-949993]
	I1129 10:23:28.975038  510582 provision.go:177] copyRemoteCerts
	I1129 10:23:28.975112  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:23:28.975156  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.993793  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.108273  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:23:29.132875  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:23:29.155284  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:23:29.177774  510582 provision.go:87] duration metric: took 454.822245ms to configureAuth
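configureAuth regenerates the server certificate with the listed SANs and copyRemoteCerts pushes it, its key, and the CA to /etc/docker on the node. A minimal sanity check of the copied cert (a sketch, run inside the machine over SSH; paths are the ones used above):

    sudo openssl x509 -noout -subject -dates -in /etc/docker/server.pem
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'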
	I1129 10:23:29.177842  510582 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:23:29.178060  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:29.178232  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.215501  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:29.215806  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:29.215820  510582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:23:29.720457  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:23:29.720476  510582 machine.go:97] duration metric: took 4.636809496s to provisionDockerMachine
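The container-runtime option is written to a sysconfig drop-in and CRI-O is restarted as part of provisioning; verifying the result on the node is a one-liner (file path exactly as written by the command above):

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio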
	I1129 10:23:29.720488  510582 start.go:293] postStartSetup for "no-preload-949993" (driver="docker")
	I1129 10:23:29.720500  510582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:23:29.720580  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:23:29.720624  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.750233  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.879484  510582 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:23:29.890502  510582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:23:29.890528  510582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:23:29.890540  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:23:29.890595  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:23:29.890671  510582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:23:29.890774  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:23:29.905057  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:29.930848  510582 start.go:296] duration metric: took 210.345457ms for postStartSetup
	I1129 10:23:29.930993  510582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:23:29.931069  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.971652  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.096636  510582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:23:30.102671  510582 fix.go:56] duration metric: took 5.41732648s for fixHost
	I1129 10:23:30.102702  510582 start.go:83] releasing machines lock for "no-preload-949993", held for 5.417379174s
	I1129 10:23:30.102796  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:30.142397  510582 ssh_runner.go:195] Run: cat /version.json
	I1129 10:23:30.142457  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.142723  510582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:23:30.142778  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.175555  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.184560  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.424237  510582 ssh_runner.go:195] Run: systemctl --version
	I1129 10:23:30.433934  510582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:23:30.496866  510582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:23:30.502557  510582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:23:30.502631  510582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:23:30.517425  510582 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
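At this point any pre-existing bridge/podman CNI configs would have been renamed with a .mk_disabled suffix by the find command above; since none were found, nothing changes. A quick check of what is actually present (sketch):

    sudo ls -la /etc/cni/net.d/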
	I1129 10:23:30.517452  510582 start.go:496] detecting cgroup driver to use...
	I1129 10:23:30.517485  510582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:23:30.517555  510582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:23:30.541820  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:23:30.562240  510582 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:23:30.562306  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:23:30.595997  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:23:30.617442  510582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:23:30.838520  510582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:23:31.010539  510582 docker.go:234] disabling docker service ...
	I1129 10:23:31.010613  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:23:31.029478  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:23:31.045443  510582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:23:31.298555  510582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:23:31.483563  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:23:31.507844  510582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:23:31.536839  510582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:23:31.536921  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.554849  510582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:23:31.554919  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.574580  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.583353  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.595654  510582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:23:31.609018  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.623256  510582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.636414  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.649004  510582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:23:31.663145  510582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:23:31.676354  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:31.912061  510582 ssh_runner.go:195] Run: sudo systemctl restart crio
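The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged port sysctl) before CRI-O is restarted; confirming the edits took effect is straightforward (a sketch using the same file path):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio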
	I1129 10:23:32.189319  510582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:23:32.189401  510582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:23:32.193961  510582 start.go:564] Will wait 60s for crictl version
	I1129 10:23:32.194026  510582 ssh_runner.go:195] Run: which crictl
	I1129 10:23:32.197754  510582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:23:32.272299  510582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:23:32.272396  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.325801  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.377241  510582 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:23:32.380119  510582 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:23:32.402383  510582 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:23:32.406539  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:23:32.416708  510582 kubeadm.go:884] updating cluster {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:23:32.416836  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:32.416887  510582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:23:32.453372  510582 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:23:32.453393  510582 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:23:32.453401  510582 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:23:32.453494  510582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-949993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:23:32.453574  510582 ssh_runner.go:195] Run: crio config
	I1129 10:23:32.518176  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:32.518198  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:32.518217  510582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:23:32.518241  510582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-949993 NodeName:no-preload-949993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:23:32.518373  510582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-949993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:23:32.518452  510582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:23:32.527111  510582 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:23:32.527191  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:23:32.535301  510582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:23:32.549602  510582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:23:32.568956  510582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
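The rendered kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new is later diffed against the existing file to decide whether the control plane needs reconfiguring. The same comparison can be reproduced by hand, and recent kubeadm releases can also lint the file (the validate subcommand is assumed to be available in v1.34):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new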
	I1129 10:23:32.585385  510582 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:23:32.589390  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:23:32.599435  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:32.799936  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
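The kubelet systemd unit and its kubeadm drop-in are written from memory over scp and the service is started after a daemon-reload; checking that the drop-in landed and the service is up uses only the paths from the scp lines above (sketch):

    systemctl cat kubelet | head -n 20
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl is-active kubelet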
	I1129 10:23:32.835409  510582 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993 for IP: 192.168.76.2
	I1129 10:23:32.835433  510582 certs.go:195] generating shared ca certs ...
	I1129 10:23:32.835450  510582 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:32.835590  510582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:23:32.835643  510582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:23:32.835655  510582 certs.go:257] generating profile certs ...
	I1129 10:23:32.835750  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key
	I1129 10:23:32.835832  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f
	I1129 10:23:32.835877  510582 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key
	I1129 10:23:32.835996  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:23:32.836031  510582 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:23:32.836047  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:23:32.836081  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:23:32.836111  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:23:32.836139  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:23:32.836186  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:32.843733  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:23:32.895544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:23:32.935981  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:23:33.000104  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:23:33.047922  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:23:33.103716  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:23:33.136749  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:23:33.187422  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:23:33.240791  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:23:33.272544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:23:33.307178  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:23:33.345546  510582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:23:33.368081  510582 ssh_runner.go:195] Run: openssl version
	I1129 10:23:33.379046  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:23:33.394404  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401376  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401447  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.458958  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:23:33.468397  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:23:33.485859  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490215  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490285  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.545511  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:23:33.556597  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:23:33.573103  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577547  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577617  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.619056  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
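Each CA-style certificate copied into /usr/share/ca-certificates is made trusted by linking it into /etc/ssl/certs under its OpenSSL subject-hash name, which is exactly what the openssl -hash / ln -fs pairs above do. The same idiom, condensed for one file (sketch):

    # compute the subject hash OpenSSL expects for the symlink name, then create it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"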
	I1129 10:23:33.627933  510582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:23:33.632229  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:23:33.682188  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:23:33.768325  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:23:33.867107  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:23:33.990455  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:23:34.173647  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
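The -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means it will). To see the actual expiry dates instead, the same certs can be queried with -enddate (sketch, paths from the commands above):

    for c in apiserver-kubelet-client etcd/server etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/${c}.crt"
    done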
	I1129 10:23:34.314431  510582 kubeadm.go:401] StartCluster: {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:34.314531  510582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:23:34.314601  510582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:23:34.429882  510582 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:23:34.429906  510582 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:23:34.429918  510582 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:23:34.429922  510582 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:23:34.429925  510582 cri.go:89] found id: ""
	I1129 10:23:34.429976  510582 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:23:34.462602  510582 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:23:34Z" level=error msg="open /run/runc: no such file or directory"
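Before deciding between a restart and a fresh init, minikube lists kube-system containers through crictl (the label-filtered query above) and then tries runc to find paused containers; the runc call fails here only because /run/runc does not exist in this container, and the flow falls back to the restart path below. The crictl query can be replayed as-is:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system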
	I1129 10:23:34.462694  510582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:23:34.493028  510582 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:23:34.493050  510582 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:23:34.493109  510582 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:23:34.514916  510582 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:23:34.515323  510582 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-949993" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.515444  510582 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-949993" cluster setting kubeconfig missing "no-preload-949993" context setting]
	I1129 10:23:34.515707  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.517173  510582 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:23:34.538453  510582 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:23:34.538487  510582 kubeadm.go:602] duration metric: took 45.43138ms to restartPrimaryControlPlane
	I1129 10:23:34.538499  510582 kubeadm.go:403] duration metric: took 224.080463ms to StartCluster
	I1129 10:23:34.538514  510582 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.538584  510582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.539276  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.539485  510582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:34.539835  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:34.539895  510582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:34.539971  510582 addons.go:70] Setting storage-provisioner=true in profile "no-preload-949993"
	I1129 10:23:34.539990  510582 addons.go:239] Setting addon storage-provisioner=true in "no-preload-949993"
	W1129 10:23:34.539996  510582 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:23:34.540021  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.540557  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.540820  510582 addons.go:70] Setting dashboard=true in profile "no-preload-949993"
	I1129 10:23:34.540842  510582 addons.go:239] Setting addon dashboard=true in "no-preload-949993"
	W1129 10:23:34.540851  510582 addons.go:248] addon dashboard should already be in state true
	I1129 10:23:34.540875  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.541271  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.544485  510582 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:34.545062  510582 addons.go:70] Setting default-storageclass=true in profile "no-preload-949993"
	I1129 10:23:34.545264  510582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-949993"
	I1129 10:23:34.545663  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.554267  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:34.602008  510582 addons.go:239] Setting addon default-storageclass=true in "no-preload-949993"
	W1129 10:23:34.602031  510582 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:23:34.602055  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.603326  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.615053  510582 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:23:34.616089  510582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:34.629458  510582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:34.629489  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:34.629503  510582 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:23:35.259018  507966 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.398524551s
	I1129 10:23:36.911024  507966 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.051014652s
	I1129 10:23:37.862258  507966 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002004509s
	I1129 10:23:37.890551  507966 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:23:37.906864  507966 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:23:37.927922  507966 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:23:37.928146  507966 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-194354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:23:37.947269  507966 kubeadm.go:319] [bootstrap-token] Using token: da5774.b3xvqvayofuxejdl
	I1129 10:23:37.950242  507966 out.go:252]   - Configuring RBAC rules ...
	I1129 10:23:37.950371  507966 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:23:37.956917  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:23:37.973044  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:23:37.978328  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:23:37.985481  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:23:37.993861  507966 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:23:38.271786  507966 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:23:38.834511  507966 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:23:39.269570  507966 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:23:39.270926  507966 kubeadm.go:319] 
	I1129 10:23:39.271005  507966 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:23:39.271011  507966 kubeadm.go:319] 
	I1129 10:23:39.271084  507966 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:23:39.271088  507966 kubeadm.go:319] 
	I1129 10:23:39.271112  507966 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:23:39.271167  507966 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:23:39.271215  507966 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:23:39.271219  507966 kubeadm.go:319] 
	I1129 10:23:39.271270  507966 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:23:39.271274  507966 kubeadm.go:319] 
	I1129 10:23:39.271324  507966 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:23:39.271328  507966 kubeadm.go:319] 
	I1129 10:23:39.271377  507966 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:23:39.271447  507966 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:23:39.271511  507966 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:23:39.271515  507966 kubeadm.go:319] 
	I1129 10:23:39.271595  507966 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:23:39.271667  507966 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:23:39.271671  507966 kubeadm.go:319] 
	I1129 10:23:39.274443  507966 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.274618  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:23:39.274665  507966 kubeadm.go:319] 	--control-plane 
	I1129 10:23:39.274685  507966 kubeadm.go:319] 
	I1129 10:23:39.274790  507966 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:23:39.274824  507966 kubeadm.go:319] 
	I1129 10:23:39.274937  507966 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.275069  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:23:39.277245  507966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:23:39.277463  507966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:23:39.277563  507966 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:23:39.277578  507966 cni.go:84] Creating CNI manager for ""
	I1129 10:23:39.277584  507966 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:39.280958  507966 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 10:23:34.629562  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.632538  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:23:34.632566  510582 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:23:34.632643  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.659316  510582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:34.659338  510582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:34.659401  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.675695  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.700605  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.711421  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:35.019252  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:35.053107  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:35.085381  510582 node_ready.go:35] waiting up to 6m0s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:35.107451  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:35.115826  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:23:35.115909  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:23:35.181608  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:23:35.181686  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:23:35.362957  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:23:35.363035  510582 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:23:35.522529  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:23:35.522606  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:23:35.587227  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:23:35.587306  510582 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:23:35.636645  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:23:35.636719  510582 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:23:35.670183  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:23:35.670257  510582 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:23:35.711342  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:23:35.711415  510582 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:23:35.762502  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:35.762579  510582 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:23:35.803385  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:39.283847  507966 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:23:39.288953  507966 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:23:39.289014  507966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:23:39.312282  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:23:39.959493  507966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:23:39.959625  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:39.959682  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-194354 minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=default-k8s-diff-port-194354 minikube.k8s.io/primary=true
	I1129 10:23:40.361401  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:40.361495  507966 ops.go:34] apiserver oom_adj: -16
	I1129 10:23:40.861451  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.361873  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.862097  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.362096  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.861458  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.384883  510582 node_ready.go:49] node "no-preload-949993" is "Ready"
	I1129 10:23:41.384912  510582 node_ready.go:38] duration metric: took 6.299436812s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:41.384926  510582 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:23:41.384987  510582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:23:41.624086  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.570863747s)
	I1129 10:23:43.322773  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.215226839s)
	I1129 10:23:43.322905  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.519441572s)
	I1129 10:23:43.323071  510582 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.938072916s)
	I1129 10:23:43.323086  510582 api_server.go:72] duration metric: took 8.783570641s to wait for apiserver process to appear ...
	I1129 10:23:43.323092  510582 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:23:43.323109  510582 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:23:43.326153  510582 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-949993 addons enable metrics-server
	
	I1129 10:23:43.329073  510582 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
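	With the dashboard manifests applied on no-preload-949993, a quick sanity check outside the test is to confirm the workloads in the kubernetes-dashboard namespace and, if wanted, open a proxied URL for this profile; this is an illustrative step, not something the test itself runs:
	
		# Inspect the dashboard workloads created by the addon
		kubectl -n kubernetes-dashboard get pods,svc
		# Open a proxied URL to the dashboard for this profile (blocks until interrupted)
		minikube -p no-preload-949993 dashboard --url
	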
	I1129 10:23:43.362405  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:43.861894  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:44.020361  507966 kubeadm.go:1114] duration metric: took 4.060786248s to wait for elevateKubeSystemPrivileges
	I1129 10:23:44.020470  507966 kubeadm.go:403] duration metric: took 25.977646325s to StartCluster
	I1129 10:23:44.020504  507966 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.020633  507966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:44.021727  507966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.022131  507966 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:44.022387  507966 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:44.022439  507966 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:44.022503  507966 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.022516  507966 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.022540  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.023006  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.022156  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:23:44.023473  507966 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.023496  507966 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-194354"
	I1129 10:23:44.023827  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.026776  507966 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:44.034308  507966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:44.060803  507966 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.060841  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.061279  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.086997  507966 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:43.332070  510582 addons.go:530] duration metric: took 8.792168658s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:23:43.332598  510582 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:23:43.334000  510582 api_server.go:141] control plane version: v1.34.1
	I1129 10:23:43.334019  510582 api_server.go:131] duration metric: took 10.921442ms to wait for apiserver health ...
	I1129 10:23:43.334028  510582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:23:43.338676  510582 system_pods.go:59] 8 kube-system pods found
	I1129 10:23:43.338760  510582 system_pods.go:61] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.338788  510582 system_pods.go:61] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.338829  510582 system_pods.go:61] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.338864  510582 system_pods.go:61] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.338896  510582 system_pods.go:61] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.338920  510582 system_pods.go:61] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.338953  510582 system_pods.go:61] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.338982  510582 system_pods.go:61] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.339005  510582 system_pods.go:74] duration metric: took 4.970521ms to wait for pod list to return data ...
	I1129 10:23:43.339026  510582 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:23:43.346685  510582 default_sa.go:45] found service account: "default"
	I1129 10:23:43.346710  510582 default_sa.go:55] duration metric: took 7.663155ms for default service account to be created ...
	I1129 10:23:43.346720  510582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:23:43.352409  510582 system_pods.go:86] 8 kube-system pods found
	I1129 10:23:43.352440  510582 system_pods.go:89] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.352450  510582 system_pods.go:89] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.352458  510582 system_pods.go:89] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.352465  510582 system_pods.go:89] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.352471  510582 system_pods.go:89] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.352477  510582 system_pods.go:89] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.352483  510582 system_pods.go:89] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.352496  510582 system_pods.go:89] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.352503  510582 system_pods.go:126] duration metric: took 5.777307ms to wait for k8s-apps to be running ...
	I1129 10:23:43.352513  510582 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:23:43.352571  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:23:43.446329  510582 system_svc.go:56] duration metric: took 93.806221ms WaitForService to wait for kubelet
	I1129 10:23:43.446362  510582 kubeadm.go:587] duration metric: took 8.906844286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:43.446385  510582 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:23:43.468875  510582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:23:43.468912  510582 node_conditions.go:123] node cpu capacity is 2
	I1129 10:23:43.468925  510582 node_conditions.go:105] duration metric: took 22.533969ms to run NodePressure ...
	I1129 10:23:43.468937  510582 start.go:242] waiting for startup goroutines ...
	I1129 10:23:43.468956  510582 start.go:247] waiting for cluster config update ...
	I1129 10:23:43.468968  510582 start.go:256] writing updated cluster config ...
	I1129 10:23:43.469217  510582 ssh_runner.go:195] Run: rm -f paused
	I1129 10:23:43.474670  510582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:23:43.483215  510582 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:23:44.090345  507966 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.090370  507966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:44.090450  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.100874  507966 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:44.100899  507966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:44.100968  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.132523  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.143267  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.401252  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:23:44.431702  507966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:44.535956  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.591821  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:45.161227  507966 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
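	The kubectl replace pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 here). To confirm the injected record by hand, assuming kubectl is pointed at the default-k8s-diff-port-194354 context, the modified Corefile can simply be read back:
	
		# Inspect the Corefile after the host record injection
		kubectl -n kube-system get configmap coredns -o yaml
	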
	I1129 10:23:45.162334  507966 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:23:45.690965  507966 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-194354" context rescaled to 1 replicas
	I1129 10:23:45.753803  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217763289s)
	I1129 10:23:45.753872  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161979524s)
	I1129 10:23:45.770491  507966 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:23:45.773465  507966 addons.go:530] duration metric: took 1.751018181s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1129 10:23:47.165805  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:45.542622  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:47.989562  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:49.666635  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:52.166177  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:49.990541  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:51.990931  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:54.666022  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:57.166497  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:54.489879  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:56.497926  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:58.989190  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:59.665270  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:02.165677  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:00.990865  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:03.488362  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:04.166335  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:06.665750  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:05.988397  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:07.989472  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:08.666205  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:11.166292  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:10.488633  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:12.489841  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:13.665315  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:15.665945  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:18.165215  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:14.989495  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:17.494318  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	I1129 10:24:18.988496  510582 pod_ready.go:94] pod "coredns-66bc5c9577-vcgbt" is "Ready"
	I1129 10:24:18.988523  510582 pod_ready.go:86] duration metric: took 35.505241752s for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.991259  510582 pod_ready.go:83] waiting for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.996288  510582 pod_ready.go:94] pod "etcd-no-preload-949993" is "Ready"
	I1129 10:24:18.996320  510582 pod_ready.go:86] duration metric: took 5.032577ms for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.998454  510582 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.003569  510582 pod_ready.go:94] pod "kube-apiserver-no-preload-949993" is "Ready"
	I1129 10:24:19.003603  510582 pod_ready.go:86] duration metric: took 5.120225ms for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.006932  510582 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.186848  510582 pod_ready.go:94] pod "kube-controller-manager-no-preload-949993" is "Ready"
	I1129 10:24:19.186884  510582 pod_ready.go:86] duration metric: took 179.913065ms for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.386987  510582 pod_ready.go:83] waiting for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.786916  510582 pod_ready.go:94] pod "kube-proxy-ffl4g" is "Ready"
	I1129 10:24:19.786948  510582 pod_ready.go:86] duration metric: took 399.93109ms for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.987148  510582 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386279  510582 pod_ready.go:94] pod "kube-scheduler-no-preload-949993" is "Ready"
	I1129 10:24:20.386307  510582 pod_ready.go:86] duration metric: took 399.132959ms for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386321  510582 pod_ready.go:40] duration metric: took 36.911571212s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:20.443695  510582 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:20.446728  510582 out.go:179] * Done! kubectl is now configured to use "no-preload-949993" cluster and "default" namespace by default
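	At this point minikube reports that kubectl is wired to the no-preload-949993 cluster; the 1.33.2 client against the 1.34.1 server is a one-minor-version skew, which kubectl tolerates. A minimal way to confirm the handover outside the test:
	
		# Confirm the context minikube just configured and list everything running
		kubectl config current-context
		kubectl get pods -A
	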
	W1129 10:24:20.165619  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:22.665117  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:24.665872  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	I1129 10:24:25.667952  507966 node_ready.go:49] node "default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:25.667980  507966 node_ready.go:38] duration metric: took 40.505616922s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:24:25.667993  507966 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:24:25.668053  507966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:24:25.681244  507966 api_server.go:72] duration metric: took 41.65902631s to wait for apiserver process to appear ...
	I1129 10:24:25.681270  507966 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:24:25.681290  507966 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 10:24:25.691496  507966 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 10:24:25.692557  507966 api_server.go:141] control plane version: v1.34.1
	I1129 10:24:25.692579  507966 api_server.go:131] duration metric: took 11.302624ms to wait for apiserver health ...
	I1129 10:24:25.692588  507966 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:24:25.696448  507966 system_pods.go:59] 8 kube-system pods found
	I1129 10:24:25.696487  507966 system_pods.go:61] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.696494  507966 system_pods.go:61] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.696501  507966 system_pods.go:61] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.696506  507966 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.696510  507966 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.696516  507966 system_pods.go:61] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.696519  507966 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.696526  507966 system_pods.go:61] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.696532  507966 system_pods.go:74] duration metric: took 3.939058ms to wait for pod list to return data ...
	I1129 10:24:25.696552  507966 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:24:25.699312  507966 default_sa.go:45] found service account: "default"
	I1129 10:24:25.699338  507966 default_sa.go:55] duration metric: took 2.780062ms for default service account to be created ...
	I1129 10:24:25.699349  507966 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:24:25.702253  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:25.702291  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.702298  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.702305  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.702310  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.702317  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.702322  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.702330  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.702336  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.702357  507966 retry.go:31] will retry after 310.669836ms: missing components: kube-dns
	I1129 10:24:26.019886  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.019926  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.019934  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.019940  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.019944  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.019949  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.019954  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.019959  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.019965  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.019986  507966 retry.go:31] will retry after 286.170038ms: missing components: kube-dns
	I1129 10:24:26.310844  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.310881  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.310888  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.310896  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.310902  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.310907  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.310911  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.310917  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.310927  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.310957  507966 retry.go:31] will retry after 343.061865ms: missing components: kube-dns
	I1129 10:24:26.658151  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.658188  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Running
	I1129 10:24:26.658196  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.658201  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.658206  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.658210  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.658214  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.658218  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.658222  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Running
	I1129 10:24:26.658229  507966 system_pods.go:126] duration metric: took 958.87435ms to wait for k8s-apps to be running ...
	I1129 10:24:26.658241  507966 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:24:26.658299  507966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:26.671648  507966 system_svc.go:56] duration metric: took 13.39846ms WaitForService to wait for kubelet
	I1129 10:24:26.671679  507966 kubeadm.go:587] duration metric: took 42.649466351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:24:26.671698  507966 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:24:26.674832  507966 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:24:26.674866  507966 node_conditions.go:123] node cpu capacity is 2
	I1129 10:24:26.674881  507966 node_conditions.go:105] duration metric: took 3.178597ms to run NodePressure ...
	I1129 10:24:26.674895  507966 start.go:242] waiting for startup goroutines ...
	I1129 10:24:26.674903  507966 start.go:247] waiting for cluster config update ...
	I1129 10:24:26.674915  507966 start.go:256] writing updated cluster config ...
	I1129 10:24:26.675229  507966 ssh_runner.go:195] Run: rm -f paused
	I1129 10:24:26.679081  507966 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:26.683003  507966 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.688164  507966 pod_ready.go:94] pod "coredns-66bc5c9577-8rvzs" is "Ready"
	I1129 10:24:26.688202  507966 pod_ready.go:86] duration metric: took 5.168069ms for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.690538  507966 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.696711  507966 pod_ready.go:94] pod "etcd-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.696743  507966 pod_ready.go:86] duration metric: took 6.17568ms for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.699232  507966 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.704231  507966 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.704300  507966 pod_ready.go:86] duration metric: took 5.037541ms for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.706968  507966 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.083799  507966 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:27.083828  507966 pod_ready.go:86] duration metric: took 376.798157ms for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.283531  507966 pod_ready.go:83] waiting for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.683679  507966 pod_ready.go:94] pod "kube-proxy-68szw" is "Ready"
	I1129 10:24:27.683710  507966 pod_ready.go:86] duration metric: took 400.149827ms for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.884031  507966 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283309  507966 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:28.283387  507966 pod_ready.go:86] duration metric: took 399.326474ms for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283410  507966 pod_ready.go:40] duration metric: took 1.60429409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:28.339155  507966 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:28.342238  507966 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-194354" cluster and "default" namespace by default
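	default-k8s-diff-port-194354 deliberately serves the API on 8444 rather than 8443 (see the kubeadm join address and the healthz URL above). A hand-run equivalent of the healthz probe in the log, assuming the node IP 192.168.85.2 is reachable from the host running the docker driver, would be:
	
		# Same healthz probe the log performs; -k skips TLS verification against the cluster CA
		curl -k https://192.168.85.2:8444/healthz
	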
	
	
	==> CRI-O <==
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.233612046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.240618463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.241612239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.257713878Z" level=info msg="Created container 824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper" id=c6fbbb98-f521-4ba9-9778-0c4a5787a7dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.259871893Z" level=info msg="Starting container: 824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49" id=5034fef7-d071-4bc6-bb7a-a3b84969637e name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.265361074Z" level=info msg="Started container" PID=1635 containerID=824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper id=5034fef7-d071-4bc6-bb7a-a3b84969637e name=/runtime.v1.RuntimeService/StartContainer sandboxID=01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9
	Nov 29 10:24:17 no-preload-949993 conmon[1633]: conmon 824f7e02c2aee5b2cab4 <ninfo>: container 1635 exited with status 1
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.548248438Z" level=info msg="Removing container: 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.560648962Z" level=info msg="Error loading conmon cgroup of container 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7: cgroup deleted" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:17 no-preload-949993 crio[650]: time="2025-11-29T10:24:17.56448502Z" level=info msg="Removed container 30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579/dashboard-metrics-scraper" id=afc47cdc-c7ce-45d8-94dd-547989d262c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.123136611Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.130771623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.130947436Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.131022013Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.134627578Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.13478964Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.134862765Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138537852Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138720509Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.138797416Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144552635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144589682Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.144613674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.152954827Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:24:23 no-preload-949993 crio[650]: time="2025-11-29T10:24:23.152986983Z" level=info msg="Updated default CNI network name to kindnet"
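	The CRI-O messages above show kindnet writing its CNI config atomically (a .temp file renamed into place) and CRI-O reloading it as the default network each time. To look at the resulting file on the node, one illustrative option is to shell into the profile:
	
		# Read the kindnet CNI config that CRI-O loaded
		minikube -p no-preload-949993 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist
	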
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	824f7e02c2aee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   01e053c534f28       dashboard-metrics-scraper-6ffb444bf9-4k579   kubernetes-dashboard
	0d6f7b88b3afc       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   a75afc25ea521       storage-provisioner                          kube-system
	8073d01e01f25       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   adcc00be7d095       kubernetes-dashboard-855c9754f9-gzbxs        kubernetes-dashboard
	7ebed6c6ed7cd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   55418714bfd8c       coredns-66bc5c9577-vcgbt                     kube-system
	1e1231b95014a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   fd6ffb11b0b92       busybox                                      default
	edde7d522ee76       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   85f00a94aa65c       kube-proxy-ffl4g                             kube-system
	0bd947bb8314f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   6ba28c254fc8f       kindnet-jxmnq                                kube-system
	c37befce33bd3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   a75afc25ea521       storage-provisioner                          kube-system
	0b5388cb21027       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a924abc6762b9       kube-controller-manager-no-preload-949993    kube-system
	3bd5c31fef611       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   03cc87cd2cbc2       kube-apiserver-no-preload-949993             kube-system
	e89ae8a5d77cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9a1c3be6916ec       etcd-no-preload-949993                       kube-system
	81b1c14d84e48       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   3744a305eb487       kube-scheduler-no-preload-949993             kube-system
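	The container status table is CRI-level, so the same data is available from crictl inside the node, which helps when matching the truncated IDs above (for example 824f7e02c2aee, the exited dashboard-metrics-scraper) to full container IDs and state. A rough equivalent:
	
		# List all CRI-O containers, including exited ones, from inside the node
		minikube -p no-preload-949993 ssh -- sudo crictl ps -a
	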
	
	
	==> coredns [7ebed6c6ed7cd2d1b7be212ae286908fb6ab40e4e3423dc80536de93d275207c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50898 - 64250 "HINFO IN 6240591624351836929.8030457551214750142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010494409s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
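	This coredns log explains the roughly 35-second wait recorded earlier: after the restart CoreDNS starts serving on :53 before the Kubernetes API is reachable, its list calls to 10.96.0.1:443 time out, and the ready plugin keeps waiting on the kubernetes plugin until connectivity returns. When debugging this outside the test, the usual first checks are the pod's readiness and its recent log output, assuming kubectl points at the no-preload-949993 context:
	
		# Readiness and recent logs for CoreDNS
		kubectl -n kube-system get pods -l k8s-app=kube-dns
		kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	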
	
	
	==> describe nodes <==
	Name:               no-preload-949993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-949993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-949993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_22_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-949993
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:24:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:24:12 +0000   Sat, 29 Nov 2025 10:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-949993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                5439880d-b2ce-4fc8-b8d7-05ac5d12654c
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-vcgbt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-949993                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-jxmnq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-949993              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-949993     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-ffl4g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-949993              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4k579    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gzbxs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 114s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                 node-controller  Node no-preload-949993 event: Registered Node no-preload-949993 in Controller
	  Normal   NodeReady                101s                 kubelet          Node no-preload-949993 status is now: NodeReady
	  Normal   Starting                 64s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)    kubelet          Node no-preload-949993 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)    kubelet          Node no-preload-949993 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)    kubelet          Node no-preload-949993 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-949993 event: Registered Node no-preload-949993 in Controller
	
	
	==> dmesg <==
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4] <==
	{"level":"warn","ts":"2025-11-29T10:23:38.477834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.590428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.622720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.648786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.692722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.748121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.824900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.869733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.915782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:38.969779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.014197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.045153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.099305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.146411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.204632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.235319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.253819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.281238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.311782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.370302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.421367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.469690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.517779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.542637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:39.722791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:24:38 up  3:07,  0 user,  load average: 4.02, 3.73, 2.83
	Linux no-preload-949993 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bd947bb8314f6126db11c7ce0f7f06d2894741d282df841341a8467fadae7c6] <==
	I1129 10:23:42.949471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:23:42.956094       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:23:42.956301       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:23:42.956343       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:23:42.956382       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:23:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:23:43.120578       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:23:43.120647       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:23:43.120680       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:23:43.121477       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:24:13.120924       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:24:13.122157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:24:13.122180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:24:13.122269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1129 10:24:14.721295       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:24:14.721327       1 metrics.go:72] Registering metrics
	I1129 10:24:14.721381       1 controller.go:711] "Syncing nftables rules"
	I1129 10:24:23.122169       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:24:23.122206       1 main.go:301] handling current node
	I1129 10:24:33.125138       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 10:24:33.125170       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7] <==
	I1129 10:23:41.416817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:23:41.461899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:23:41.461999       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 10:23:41.462407       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:23:41.466003       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:23:41.466346       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:23:41.484369       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:23:41.484690       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:23:41.487065       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:23:41.487079       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:23:41.516842       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:23:41.526483       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:23:41.526645       1 cache.go:39] Caches are synced for autoregister controller
	E1129 10:23:41.533802       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:23:41.843883       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:23:42.147970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:23:42.541765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:23:42.701735       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:23:42.814581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:23:42.861367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:23:43.080885       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.157.180"}
	I1129 10:23:43.131207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.191.48"}
	I1129 10:23:45.683152       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:23:45.784506       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:23:45.832149       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d] <==
	I1129 10:23:45.454227       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 10:23:45.460973       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:23:45.475062       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:23:45.475305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:23:45.475358       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:23:45.475390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:23:45.475744       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:23:45.475808       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:23:45.492826       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:23:45.493612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:23:45.494403       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:23:45.499401       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:23:45.500258       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:23:45.522302       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:23:45.500345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:23:45.510195       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:23:45.527897       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-949993"
	I1129 10:23:45.527975       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:23:45.528052       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:23:45.529114       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:23:45.519796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:23:45.519833       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:23:45.555741       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:23:45.555967       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:23:45.562247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [edde7d522ee76ca987e27608ab5a2d4ac968957b65986bd758dc0841ffba33e2] <==
	I1129 10:23:43.159189       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:23:43.265967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:23:43.378347       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:23:43.378387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:23:43.378491       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:23:43.513410       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:23:43.513533       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:23:43.522355       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:23:43.522712       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:23:43.522972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:43.524267       1 config.go:200] "Starting service config controller"
	I1129 10:23:43.524552       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:23:43.524624       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:23:43.524661       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:23:43.524697       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:23:43.524724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:23:43.525356       1 config.go:309] "Starting node config controller"
	I1129 10:23:43.528682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:23:43.528816       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:23:43.624825       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:23:43.624825       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:23:43.624852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a] <==
	I1129 10:23:36.558053       1 serving.go:386] Generated self-signed cert in-memory
	W1129 10:23:41.270960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 10:23:41.270999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 10:23:41.271009       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 10:23:41.271017       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 10:23:41.537860       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:23:41.537889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:41.547716       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:23:41.547846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:41.547865       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:41.547881       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:23:41.657042       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092123     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/46e9c8ab-d9a3-40ac-8bd8-9451c168f859-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4k579\" (UID: \"46e9c8ab-d9a3-40ac-8bd8-9451c168f859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092183     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7l88\" (UniqueName: \"kubernetes.io/projected/46e9c8ab-d9a3-40ac-8bd8-9451c168f859-kube-api-access-j7l88\") pod \"dashboard-metrics-scraper-6ffb444bf9-4k579\" (UID: \"46e9c8ab-d9a3-40ac-8bd8-9451c168f859\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092208     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kn9s\" (UniqueName: \"kubernetes.io/projected/e0aa7948-1813-4a7a-aee7-d516085b2f2a-kube-api-access-2kn9s\") pod \"kubernetes-dashboard-855c9754f9-gzbxs\" (UID: \"e0aa7948-1813-4a7a-aee7-d516085b2f2a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: I1129 10:23:46.092234     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0aa7948-1813-4a7a-aee7-d516085b2f2a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gzbxs\" (UID: \"e0aa7948-1813-4a7a-aee7-d516085b2f2a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs"
	Nov 29 10:23:46 no-preload-949993 kubelet[771]: W1129 10:23:46.328090     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01cb8829dafdddca725a31dfe22a64db34626587765ef1daa05686743a5eacd3/crio-01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9 WatchSource:0}: Error finding container 01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9: Status 404 returned error can't find the container with id 01e053c534f28d1d8f9064bf4e171c1b742df96382d62a6d5c25c711ee55f8f9
	Nov 29 10:23:48 no-preload-949993 kubelet[771]: I1129 10:23:48.801690     771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 10:23:56 no-preload-949993 kubelet[771]: I1129 10:23:56.487190     771 scope.go:117] "RemoveContainer" containerID="88b93c1c4a1396db911e85839960d9d8f77ac7acea4328b161fc675d9ffa44d3"
	Nov 29 10:23:56 no-preload-949993 kubelet[771]: I1129 10:23:56.526107     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gzbxs" podStartSLOduration=6.728299828 podStartE2EDuration="11.526057553s" podCreationTimestamp="2025-11-29 10:23:45 +0000 UTC" firstStartedPulling="2025-11-29 10:23:46.309243564 +0000 UTC m=+13.485472396" lastFinishedPulling="2025-11-29 10:23:51.107001289 +0000 UTC m=+18.283230121" observedRunningTime="2025-11-29 10:23:51.511191562 +0000 UTC m=+18.687420418" watchObservedRunningTime="2025-11-29 10:23:56.526057553 +0000 UTC m=+23.702286385"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: I1129 10:23:57.491953     771 scope.go:117] "RemoveContainer" containerID="88b93c1c4a1396db911e85839960d9d8f77ac7acea4328b161fc675d9ffa44d3"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: I1129 10:23:57.492949     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:23:57 no-preload-949993 kubelet[771]: E1129 10:23:57.493278     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:23:58 no-preload-949993 kubelet[771]: I1129 10:23:58.496459     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:23:58 no-preload-949993 kubelet[771]: E1129 10:23:58.497333     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:06 no-preload-949993 kubelet[771]: I1129 10:24:06.254575     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:06 no-preload-949993 kubelet[771]: E1129 10:24:06.254781     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:13 no-preload-949993 kubelet[771]: I1129 10:24:13.531972     771 scope.go:117] "RemoveContainer" containerID="c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.229823     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.546382     771 scope.go:117] "RemoveContainer" containerID="30b236775f1eb3c01803e522d1ccb466872de5f06e7d88186a54f652ac7eb2b7"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: I1129 10:24:17.546739     771 scope.go:117] "RemoveContainer" containerID="824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	Nov 29 10:24:17 no-preload-949993 kubelet[771]: E1129 10:24:17.546989     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:26 no-preload-949993 kubelet[771]: I1129 10:24:26.254315     771 scope.go:117] "RemoveContainer" containerID="824f7e02c2aee5b2cab47c3191d24f6196d7ced0e270014e7385e2d5e3c7af49"
	Nov 29 10:24:26 no-preload-949993 kubelet[771]: E1129 10:24:26.254978     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4k579_kubernetes-dashboard(46e9c8ab-d9a3-40ac-8bd8-9451c168f859)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4k579" podUID="46e9c8ab-d9a3-40ac-8bd8-9451c168f859"
	Nov 29 10:24:32 no-preload-949993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:24:32 no-preload-949993 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:24:32 no-preload-949993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8073d01e01f256d04be1a4778a38b492288c905eb68f7f59e9e88869f602b4c9] <==
	2025/11/29 10:23:51 Using namespace: kubernetes-dashboard
	2025/11/29 10:23:51 Using in-cluster config to connect to apiserver
	2025/11/29 10:23:51 Using secret token for csrf signing
	2025/11/29 10:23:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:23:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:23:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:23:51 Generating JWE encryption key
	2025/11/29 10:23:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:23:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:23:52 Initializing JWE encryption key from synchronized object
	2025/11/29 10:23:52 Creating in-cluster Sidecar client
	2025/11/29 10:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:23:52 Serving insecurely on HTTP port: 9090
	2025/11/29 10:24:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:23:51 Starting overwatch
	
	
	==> storage-provisioner [0d6f7b88b3afc61aa717c1348d07d0eac84c75a53fcc5fe33c5fde61127d07c6] <==
	I1129 10:24:13.585866       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:24:13.604428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:24:13.604722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:24:13.607805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:17.062440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:21.323157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:24.921666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:27.975143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:30.997654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:31.003221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:31.003486       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:24:31.003680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d!
	I1129 10:24:31.004805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad137e71-6950-4dad-a697-38d979710672", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d became leader
	W1129 10:24:31.011431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:31.024604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:31.104661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-949993_4eeedd45-31bf-40e7-b4d7-a2d94d59333d!
	W1129 10:24:33.027945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:33.032528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:35.036469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:35.046380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:37.048933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:37.057415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c37befce33bd34467be730cc4f6db56a780f49dc021a34f8f8ee923f16d80c0e] <==
	I1129 10:23:42.692259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:24:12.693694       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949993 -n no-preload-949993
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949993 -n no-preload-949993: exit status 2 (458.022452ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-949993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1129 10:24:36.836606  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (410.002821ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:24:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-194354 describe deploy/metrics-server -n kube-system: exit status 1 (99.755645ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-194354 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-194354
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-194354:

-- stdout --
	[
	    {
	        "Id": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	        "Created": "2025-11-29T10:23:08.777622833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508362,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:23:08.848020208Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hosts",
	        "LogPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88-json.log",
	        "Name": "/default-k8s-diff-port-194354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-194354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-194354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	                "LowerDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-194354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-194354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-194354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ca62e2563312daa2a1106ba9a97bb19169d22b1d8b48ab44cdafdcdb17ebb77",
	            "SandboxKey": "/var/run/docker/netns/3ca62e256331",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-194354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:d8:ec:b0:a4:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a57979b7c8de5b2d73e81501e805dfbd816f410a202f054d691d84e66ed18d",
	                    "EndpointID": "14bf9da4e009748bb3c5e359230479520154f6de857129182754653e5b377018",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-194354",
	                        "4c5ba5cc2474"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25: (1.63509246s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-685516 image list --format=json                                                                                                                          │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ pause   │ -p old-k8s-version-685516 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │                     │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p old-k8s-version-685516                                                                                                                                                │ old-k8s-version-685516       │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:19 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:19 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p embed-certs-708011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │                     │
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                             │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                          │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                              │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                             │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                    │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                    │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                              │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                               │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                              │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:23:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:23:24.356728  510582 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:23:24.356918  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.356945  510582 out.go:374] Setting ErrFile to fd 2...
	I1129 10:23:24.356964  510582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:23:24.357268  510582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:23:24.357953  510582 out.go:368] Setting JSON to false
	I1129 10:23:24.358973  510582 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11154,"bootTime":1764400651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:23:24.359078  510582 start.go:143] virtualization:  
	I1129 10:23:24.362143  510582 out.go:179] * [no-preload-949993] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:23:24.366229  510582 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:23:24.366312  510582 notify.go:221] Checking for updates...
	I1129 10:23:24.370325  510582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:23:24.373233  510582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:24.376196  510582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:23:24.379186  510582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:23:24.382194  510582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:23:24.385561  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:24.386290  510582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:23:24.432374  510582 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:23:24.432489  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.519043  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.509678937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.519145  510582 docker.go:319] overlay module found
	I1129 10:23:24.522278  510582 out.go:179] * Using the docker driver based on existing profile
	I1129 10:23:24.525120  510582 start.go:309] selected driver: docker
	I1129 10:23:24.525138  510582 start.go:927] validating driver "docker" against &{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.525239  510582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:23:24.525907  510582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:23:24.635064  510582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:23:24.624852127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:23:24.635418  510582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:24.635452  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:24.635516  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:24.635567  510582 start.go:353] cluster config:
	{Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:24.640561  510582 out.go:179] * Starting "no-preload-949993" primary control-plane node in "no-preload-949993" cluster
	I1129 10:23:24.643341  510582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:23:24.646345  510582 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:23:24.649223  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:24.649366  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:24.649692  510582 cache.go:107] acquiring lock: {Name:mk7e036f21c3fa53998769ec8ca8e9d0cc43797a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.649767  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 10:23:24.649776  510582 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.071µs
	I1129 10:23:24.649788  510582 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 10:23:24.649800  510582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:23:24.650013  510582 cache.go:107] acquiring lock: {Name:mkec0dc08372453f12658d7249505bdb38e0468a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650140  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 10:23:24.650153  510582 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 145.208µs
	I1129 10:23:24.650160  510582 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 10:23:24.650183  510582 cache.go:107] acquiring lock: {Name:mk55e5c5c1d216b13668659dfb1a1298483fe357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650228  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 10:23:24.650234  510582 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 53.531µs
	I1129 10:23:24.650240  510582 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 10:23:24.650250  510582 cache.go:107] acquiring lock: {Name:mk79de74aa677651359631e14e64f02dbae72c16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650278  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 10:23:24.650283  510582 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 34.487µs
	I1129 10:23:24.650289  510582 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 10:23:24.650298  510582 cache.go:107] acquiring lock: {Name:mk3420fbe5609e73633731fff1b3352eed3a8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650322  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 10:23:24.650327  510582 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.573µs
	I1129 10:23:24.650333  510582 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 10:23:24.650348  510582 cache.go:107] acquiring lock: {Name:mkc2341e09a949f9273b1d33b0a3b4021526fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650378  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 10:23:24.650383  510582 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.472µs
	I1129 10:23:24.650388  510582 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 10:23:24.650397  510582 cache.go:107] acquiring lock: {Name:mkb12ce0a127601415f42976e337ea76e82915af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650520  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1129 10:23:24.650532  510582 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 134.861µs
	I1129 10:23:24.650539  510582 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 10:23:24.650574  510582 cache.go:107] acquiring lock: {Name:mk0167a0bfcd689b945be8d473d2efef87ce9fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.650609  510582 cache.go:115] /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 10:23:24.650614  510582 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 43.085µs
	I1129 10:23:24.650627  510582 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 10:23:24.650634  510582 cache.go:87] Successfully saved all images to host disk.
	I1129 10:23:24.685204  510582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:23:24.685223  510582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:23:24.685237  510582 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:23:24.685268  510582 start.go:360] acquireMachinesLock for no-preload-949993: {Name:mk6ff94a11813e006c209466e9cbb5aadf7ae1bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:23:24.685314  510582 start.go:364] duration metric: took 32.796µs to acquireMachinesLock for "no-preload-949993"
	I1129 10:23:24.685333  510582 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:23:24.685338  510582 fix.go:54] fixHost starting: 
	I1129 10:23:24.685582  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:24.702230  510582 fix.go:112] recreateIfNeeded on no-preload-949993: state=Stopped err=<nil>
	W1129 10:23:24.702266  510582 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:23:23.758317  507966 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:23:23.758401  507966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:23:24.783441  507966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:23:25.344447  507966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:23:25.588671  507966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:23:26.241886  507966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:23:26.665337  507966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:23:26.666189  507966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:23:26.669919  507966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:23:26.672450  507966 out.go:252]   - Booting up control plane ...
	I1129 10:23:26.672562  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:23:26.672660  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:23:26.673289  507966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:23:26.693850  507966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:23:26.693960  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:23:26.703944  507966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:23:26.704045  507966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:23:26.704084  507966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:23:26.854967  507966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:23:26.855128  507966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 10:23:27.856349  507966 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00179272s
	I1129 10:23:27.859935  507966 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:23:27.860265  507966 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1129 10:23:27.860364  507966 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:23:27.860445  507966 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:23:24.705502  510582 out.go:252] * Restarting existing docker container for "no-preload-949993" ...
	I1129 10:23:24.705587  510582 cli_runner.go:164] Run: docker start no-preload-949993
	I1129 10:23:25.023675  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:25.045186  510582 kic.go:430] container "no-preload-949993" state is running.
	I1129 10:23:25.045565  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:25.083414  510582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/config.json ...
	I1129 10:23:25.083658  510582 machine.go:94] provisionDockerMachine start ...
	I1129 10:23:25.083731  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:25.111287  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:25.111617  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:25.111633  510582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:23:25.114058  510582 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:23:28.290462  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.290551  510582 ubuntu.go:182] provisioning hostname "no-preload-949993"
	I1129 10:23:28.290661  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.319815  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.320120  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.320131  510582 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-949993 && echo "no-preload-949993" | sudo tee /etc/hostname
	I1129 10:23:28.514673  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-949993
	
	I1129 10:23:28.514831  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.546204  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:28.546531  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:28.546547  510582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-949993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-949993/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-949993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:23:28.722758  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:23:28.722825  510582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:23:28.722873  510582 ubuntu.go:190] setting up certificates
	I1129 10:23:28.722920  510582 provision.go:84] configureAuth start
	I1129 10:23:28.723001  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:28.747313  510582 provision.go:143] copyHostCerts
	I1129 10:23:28.747381  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:23:28.747394  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:23:28.747471  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:23:28.747565  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:23:28.747576  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:23:28.747601  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:23:28.747692  510582 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:23:28.747697  510582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:23:28.747720  510582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:23:28.747771  510582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.no-preload-949993 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-949993]
	I1129 10:23:28.975038  510582 provision.go:177] copyRemoteCerts
	I1129 10:23:28.975112  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:23:28.975156  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:28.993793  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.108273  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:23:29.132875  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:23:29.155284  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:23:29.177774  510582 provision.go:87] duration metric: took 454.822245ms to configureAuth
	I1129 10:23:29.177842  510582 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:23:29.178060  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:29.178232  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.215501  510582 main.go:143] libmachine: Using SSH client type: native
	I1129 10:23:29.215806  510582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1129 10:23:29.215820  510582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:23:29.720457  510582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:23:29.720476  510582 machine.go:97] duration metric: took 4.636809496s to provisionDockerMachine
	I1129 10:23:29.720488  510582 start.go:293] postStartSetup for "no-preload-949993" (driver="docker")
	I1129 10:23:29.720500  510582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:23:29.720580  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:23:29.720624  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.750233  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:29.879484  510582 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:23:29.890502  510582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:23:29.890528  510582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:23:29.890540  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:23:29.890595  510582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:23:29.890671  510582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:23:29.890774  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:23:29.905057  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:29.930848  510582 start.go:296] duration metric: took 210.345457ms for postStartSetup
	I1129 10:23:29.930993  510582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:23:29.931069  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:29.971652  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.096636  510582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:23:30.102671  510582 fix.go:56] duration metric: took 5.41732648s for fixHost
	I1129 10:23:30.102702  510582 start.go:83] releasing machines lock for "no-preload-949993", held for 5.417379174s
	I1129 10:23:30.102796  510582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-949993
	I1129 10:23:30.142397  510582 ssh_runner.go:195] Run: cat /version.json
	I1129 10:23:30.142457  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.142723  510582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:23:30.142778  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:30.175555  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.184560  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:30.424237  510582 ssh_runner.go:195] Run: systemctl --version
	I1129 10:23:30.433934  510582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:23:30.496866  510582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:23:30.502557  510582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:23:30.502631  510582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:23:30.517425  510582 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:23:30.517452  510582 start.go:496] detecting cgroup driver to use...
	I1129 10:23:30.517485  510582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:23:30.517555  510582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:23:30.541820  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:23:30.562240  510582 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:23:30.562306  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:23:30.595997  510582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:23:30.617442  510582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:23:30.838520  510582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:23:31.010539  510582 docker.go:234] disabling docker service ...
	I1129 10:23:31.010613  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:23:31.029478  510582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:23:31.045443  510582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:23:31.298555  510582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:23:31.483563  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:23:31.507844  510582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:23:31.536839  510582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:23:31.536921  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.554849  510582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:23:31.554919  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.574580  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.583353  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.595654  510582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:23:31.609018  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.623256  510582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.636414  510582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:23:31.649004  510582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:23:31.663145  510582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:23:31.676354  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:31.912061  510582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:23:32.189319  510582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:23:32.189401  510582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:23:32.193961  510582 start.go:564] Will wait 60s for crictl version
	I1129 10:23:32.194026  510582 ssh_runner.go:195] Run: which crictl
	I1129 10:23:32.197754  510582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:23:32.272299  510582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:23:32.272396  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.325801  510582 ssh_runner.go:195] Run: crio --version
	I1129 10:23:32.377241  510582 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:23:32.380119  510582 cli_runner.go:164] Run: docker network inspect no-preload-949993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:23:32.402383  510582 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:23:32.406539  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:23:32.416708  510582 kubeadm.go:884] updating cluster {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:23:32.416836  510582 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:23:32.416887  510582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:23:32.453372  510582 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:23:32.453393  510582 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:23:32.453401  510582 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:23:32.453494  510582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-949993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:23:32.453574  510582 ssh_runner.go:195] Run: crio config
	I1129 10:23:32.518176  510582 cni.go:84] Creating CNI manager for ""
	I1129 10:23:32.518198  510582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:32.518217  510582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:23:32.518241  510582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-949993 NodeName:no-preload-949993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:23:32.518373  510582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-949993"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:23:32.518452  510582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:23:32.527111  510582 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:23:32.527191  510582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:23:32.535301  510582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:23:32.549602  510582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:23:32.568956  510582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
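The rendered multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---) is staged as /var/tmp/minikube/kubeadm.yaml.new. As a side note, recent kubeadm releases can sanity-check such a file before it is consumed; a minimal sketch, not something the test itself runs:

    # Validate the generated config against the target version's kubeadm API types.
    # 'kubeadm config validate' is available in recent kubeadm releases.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new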
	I1129 10:23:32.585385  510582 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:23:32.589390  510582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
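The one-liner above makes control-plane.minikube.internal resolve to the node IP: it strips any stale entry from /etc/hosts and appends a fresh one, writing through a temp file so only the final copy needs root. The same pattern written out generically (host name and IP are placeholders):

    # Rewrite /etc/hosts via a temp file; sudo is only needed for the final copy.
    IP=192.168.76.2
    NAME=control-plane.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts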
	I1129 10:23:32.599435  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:32.799936  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:32.835409  510582 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993 for IP: 192.168.76.2
	I1129 10:23:32.835433  510582 certs.go:195] generating shared ca certs ...
	I1129 10:23:32.835450  510582 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:32.835590  510582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:23:32.835643  510582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:23:32.835655  510582 certs.go:257] generating profile certs ...
	I1129 10:23:32.835750  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.key
	I1129 10:23:32.835832  510582 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key.e0168a5f
	I1129 10:23:32.835877  510582 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key
	I1129 10:23:32.835996  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:23:32.836031  510582 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:23:32.836047  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:23:32.836081  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:23:32.836111  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:23:32.836139  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:23:32.836186  510582 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:23:32.843733  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:23:32.895544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:23:32.935981  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:23:33.000104  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:23:33.047922  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:23:33.103716  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:23:33.136749  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:23:33.187422  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:23:33.240791  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:23:33.272544  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:23:33.307178  510582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:23:33.345546  510582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:23:33.368081  510582 ssh_runner.go:195] Run: openssl version
	I1129 10:23:33.379046  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:23:33.394404  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401376  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.401447  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:23:33.458958  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:23:33.468397  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:23:33.485859  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490215  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.490285  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:23:33.545511  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:23:33.556597  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:23:33.573103  510582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577547  510582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.577617  510582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:23:33.619056  510582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
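The three ls/openssl/ln sequences above follow OpenSSL's hashed-directory convention: a CA file under /etc/ssl/certs must also be reachable as <subject-hash>.0 for OpenSSL-based verification to find it. A generic sketch of that single step (the certificate path is a placeholder):

    # Link a CA certificate into the OpenSSL trust directory under its subject hash.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"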
	I1129 10:23:33.627933  510582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:23:33.632229  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:23:33.682188  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:23:33.768325  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:23:33.867107  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:23:33.990455  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:23:34.173647  510582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
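Each -checkend 86400 run asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it has expired or is about to, which is what decides whether certs need regenerating. A minimal sketch:

    # Exit 0 if the cert is still valid 24h from now, non-zero otherwise.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expired or expiring soon"
    fi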
	I1129 10:23:34.314431  510582 kubeadm.go:401] StartCluster: {Name:no-preload-949993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-949993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:23:34.314531  510582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:23:34.314601  510582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:23:34.429882  510582 cri.go:89] found id: "0b5388cb2102718012f380ff905fff83bbcdfa2c9f1a922490dfa27954d3001d"
	I1129 10:23:34.429906  510582 cri.go:89] found id: "3bd5c31fef611e3342639aee2ad5c0a864ce20a4cb26ddef214a1ca464ac61b7"
	I1129 10:23:34.429918  510582 cri.go:89] found id: "e89ae8a5d77cb4aa16b4bf39542e253a18330e76675273fa941122156d4f92f4"
	I1129 10:23:34.429922  510582 cri.go:89] found id: "81b1c14d84e48c74a10580850c45bd6def9a840eced246be1a55824196ec697a"
	I1129 10:23:34.429925  510582 cri.go:89] found id: ""
	I1129 10:23:34.429976  510582 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:23:34.462602  510582 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:23:34Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:23:34.462694  510582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:23:34.493028  510582 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:23:34.493050  510582 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:23:34.493109  510582 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:23:34.514916  510582 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:23:34.515323  510582 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-949993" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.515444  510582 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-949993" cluster setting kubeconfig missing "no-preload-949993" context setting]
	I1129 10:23:34.515707  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.517173  510582 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:23:34.538453  510582 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:23:34.538487  510582 kubeadm.go:602] duration metric: took 45.43138ms to restartPrimaryControlPlane
	I1129 10:23:34.538499  510582 kubeadm.go:403] duration metric: took 224.080463ms to StartCluster
	I1129 10:23:34.538514  510582 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.538584  510582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:34.539276  510582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:34.539485  510582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:34.539835  510582 config.go:182] Loaded profile config "no-preload-949993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:34.539895  510582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:34.539971  510582 addons.go:70] Setting storage-provisioner=true in profile "no-preload-949993"
	I1129 10:23:34.539990  510582 addons.go:239] Setting addon storage-provisioner=true in "no-preload-949993"
	W1129 10:23:34.539996  510582 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:23:34.540021  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.540557  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.540820  510582 addons.go:70] Setting dashboard=true in profile "no-preload-949993"
	I1129 10:23:34.540842  510582 addons.go:239] Setting addon dashboard=true in "no-preload-949993"
	W1129 10:23:34.540851  510582 addons.go:248] addon dashboard should already be in state true
	I1129 10:23:34.540875  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.541271  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.544485  510582 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:34.545062  510582 addons.go:70] Setting default-storageclass=true in profile "no-preload-949993"
	I1129 10:23:34.545264  510582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-949993"
	I1129 10:23:34.545663  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.554267  510582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:34.602008  510582 addons.go:239] Setting addon default-storageclass=true in "no-preload-949993"
	W1129 10:23:34.602031  510582 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:23:34.602055  510582 host.go:66] Checking if "no-preload-949993" exists ...
	I1129 10:23:34.603326  510582 cli_runner.go:164] Run: docker container inspect no-preload-949993 --format={{.State.Status}}
	I1129 10:23:34.615053  510582 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:23:34.616089  510582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:34.629458  510582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:34.629489  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:34.629503  510582 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:23:35.259018  507966 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.398524551s
	I1129 10:23:36.911024  507966 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.051014652s
	I1129 10:23:37.862258  507966 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002004509s
	I1129 10:23:37.890551  507966 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:23:37.906864  507966 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:23:37.927922  507966 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:23:37.928146  507966 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-194354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:23:37.947269  507966 kubeadm.go:319] [bootstrap-token] Using token: da5774.b3xvqvayofuxejdl
	I1129 10:23:37.950242  507966 out.go:252]   - Configuring RBAC rules ...
	I1129 10:23:37.950371  507966 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:23:37.956917  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:23:37.973044  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:23:37.978328  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:23:37.985481  507966 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:23:37.993861  507966 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:23:38.271786  507966 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:23:38.834511  507966 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:23:39.269570  507966 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:23:39.270926  507966 kubeadm.go:319] 
	I1129 10:23:39.271005  507966 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:23:39.271011  507966 kubeadm.go:319] 
	I1129 10:23:39.271084  507966 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:23:39.271088  507966 kubeadm.go:319] 
	I1129 10:23:39.271112  507966 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:23:39.271167  507966 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:23:39.271215  507966 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:23:39.271219  507966 kubeadm.go:319] 
	I1129 10:23:39.271270  507966 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:23:39.271274  507966 kubeadm.go:319] 
	I1129 10:23:39.271324  507966 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:23:39.271328  507966 kubeadm.go:319] 
	I1129 10:23:39.271377  507966 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:23:39.271447  507966 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:23:39.271511  507966 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:23:39.271515  507966 kubeadm.go:319] 
	I1129 10:23:39.271595  507966 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:23:39.271667  507966 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:23:39.271671  507966 kubeadm.go:319] 
	I1129 10:23:39.274443  507966 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.274618  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:23:39.274665  507966 kubeadm.go:319] 	--control-plane 
	I1129 10:23:39.274685  507966 kubeadm.go:319] 
	I1129 10:23:39.274790  507966 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:23:39.274824  507966 kubeadm.go:319] 
	I1129 10:23:39.274937  507966 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token da5774.b3xvqvayofuxejdl \
	I1129 10:23:39.275069  507966 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:23:39.277245  507966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:23:39.277463  507966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:23:39.277563  507966 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:23:39.277578  507966 cni.go:84] Creating CNI manager for ""
	I1129 10:23:39.277584  507966 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:23:39.280958  507966 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 10:23:34.629562  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.632538  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:23:34.632566  510582 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:23:34.632643  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.659316  510582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:34.659338  510582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:34.659401  510582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-949993
	I1129 10:23:34.675695  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.700605  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:34.711421  510582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/no-preload-949993/id_rsa Username:docker}
	I1129 10:23:35.019252  510582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:35.053107  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:35.085381  510582 node_ready.go:35] waiting up to 6m0s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:35.107451  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:35.115826  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:23:35.115909  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:23:35.181608  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:23:35.181686  510582 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:23:35.362957  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:23:35.363035  510582 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:23:35.522529  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:23:35.522606  510582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:23:35.587227  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:23:35.587306  510582 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:23:35.636645  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:23:35.636719  510582 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:23:35.670183  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:23:35.670257  510582 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:23:35.711342  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:23:35.711415  510582 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:23:35.762502  510582 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:35.762579  510582 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:23:35.803385  510582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:23:39.283847  507966 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:23:39.288953  507966 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:23:39.289014  507966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:23:39.312282  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:23:39.959493  507966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:23:39.959625  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:39.959682  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-194354 minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=default-k8s-diff-port-194354 minikube.k8s.io/primary=true
	I1129 10:23:40.361401  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:40.361495  507966 ops.go:34] apiserver oom_adj: -16
	I1129 10:23:40.861451  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.361873  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.862097  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.362096  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:42.861458  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:41.384883  510582 node_ready.go:49] node "no-preload-949993" is "Ready"
	I1129 10:23:41.384912  510582 node_ready.go:38] duration metric: took 6.299436812s for node "no-preload-949993" to be "Ready" ...
	I1129 10:23:41.384926  510582 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:23:41.384987  510582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:23:41.624086  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.570863747s)
	I1129 10:23:43.322773  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.215226839s)
	I1129 10:23:43.322905  510582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.519441572s)
	I1129 10:23:43.323071  510582 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.938072916s)
	I1129 10:23:43.323086  510582 api_server.go:72] duration metric: took 8.783570641s to wait for apiserver process to appear ...
	I1129 10:23:43.323092  510582 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:23:43.323109  510582 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:23:43.326153  510582 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-949993 addons enable metrics-server
	
	I1129 10:23:43.329073  510582 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:23:43.362405  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:43.861894  507966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:23:44.020361  507966 kubeadm.go:1114] duration metric: took 4.060786248s to wait for elevateKubeSystemPrivileges
	I1129 10:23:44.020470  507966 kubeadm.go:403] duration metric: took 25.977646325s to StartCluster
	I1129 10:23:44.020504  507966 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.020633  507966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:23:44.021727  507966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:23:44.022131  507966 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:23:44.022387  507966 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:23:44.022439  507966 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:23:44.022503  507966 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.022516  507966 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.022540  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.023006  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.022156  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:23:44.023473  507966 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-194354"
	I1129 10:23:44.023496  507966 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-194354"
	I1129 10:23:44.023827  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.026776  507966 out.go:179] * Verifying Kubernetes components...
	I1129 10:23:44.034308  507966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:23:44.060803  507966 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-194354"
	I1129 10:23:44.060841  507966 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:23:44.061279  507966 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:23:44.086997  507966 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:23:43.332070  510582 addons.go:530] duration metric: took 8.792168658s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:23:43.332598  510582 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:23:43.334000  510582 api_server.go:141] control plane version: v1.34.1
	I1129 10:23:43.334019  510582 api_server.go:131] duration metric: took 10.921442ms to wait for apiserver health ...
	I1129 10:23:43.334028  510582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:23:43.338676  510582 system_pods.go:59] 8 kube-system pods found
	I1129 10:23:43.338760  510582 system_pods.go:61] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.338788  510582 system_pods.go:61] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.338829  510582 system_pods.go:61] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.338864  510582 system_pods.go:61] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.338896  510582 system_pods.go:61] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.338920  510582 system_pods.go:61] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.338953  510582 system_pods.go:61] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.338982  510582 system_pods.go:61] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.339005  510582 system_pods.go:74] duration metric: took 4.970521ms to wait for pod list to return data ...
	I1129 10:23:43.339026  510582 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:23:43.346685  510582 default_sa.go:45] found service account: "default"
	I1129 10:23:43.346710  510582 default_sa.go:55] duration metric: took 7.663155ms for default service account to be created ...
	I1129 10:23:43.346720  510582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:23:43.352409  510582 system_pods.go:86] 8 kube-system pods found
	I1129 10:23:43.352440  510582 system_pods.go:89] "coredns-66bc5c9577-vcgbt" [52333222-cd4d-4c66-aa3e-1aa0fa9e1078] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:23:43.352450  510582 system_pods.go:89] "etcd-no-preload-949993" [bb193cc4-411c-4510-b2a7-b0b8addac524] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:23:43.352458  510582 system_pods.go:89] "kindnet-jxmnq" [fb632bfa-f7ff-459c-8b50-8213e1d36462] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:23:43.352465  510582 system_pods.go:89] "kube-apiserver-no-preload-949993" [5c425dd3-47dc-407c-bc55-901fe9865e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:23:43.352471  510582 system_pods.go:89] "kube-controller-manager-no-preload-949993" [3790d691-d776-4601-a6bc-b18bc83000ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:23:43.352477  510582 system_pods.go:89] "kube-proxy-ffl4g" [f62b4d17-773c-4a38-ba6c-4ac103f38b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:23:43.352483  510582 system_pods.go:89] "kube-scheduler-no-preload-949993" [f54fa329-43bd-4885-b598-cefa9e6f1e0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:23:43.352496  510582 system_pods.go:89] "storage-provisioner" [b85d010c-01c5-42c7-83b9-578437039e17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:23:43.352503  510582 system_pods.go:126] duration metric: took 5.777307ms to wait for k8s-apps to be running ...
	I1129 10:23:43.352513  510582 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:23:43.352571  510582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:23:43.446329  510582 system_svc.go:56] duration metric: took 93.806221ms WaitForService to wait for kubelet
	I1129 10:23:43.446362  510582 kubeadm.go:587] duration metric: took 8.906844286s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:23:43.446385  510582 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:23:43.468875  510582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:23:43.468912  510582 node_conditions.go:123] node cpu capacity is 2
	I1129 10:23:43.468925  510582 node_conditions.go:105] duration metric: took 22.533969ms to run NodePressure ...
	I1129 10:23:43.468937  510582 start.go:242] waiting for startup goroutines ...
	I1129 10:23:43.468956  510582 start.go:247] waiting for cluster config update ...
	I1129 10:23:43.468968  510582 start.go:256] writing updated cluster config ...
	I1129 10:23:43.469217  510582 ssh_runner.go:195] Run: rm -f paused
	I1129 10:23:43.474670  510582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:23:43.483215  510582 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:23:44.090345  507966 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.090370  507966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:23:44.090450  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.100874  507966 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:44.100899  507966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:23:44.100968  507966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:23:44.132523  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.143267  507966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:23:44.401252  507966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
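The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the gateway IP (with fallthrough so other names still go to the upstream resolvers), adds the log plugin before errors, and feeds the result back through kubectl replace. A minimal check of the result, assuming kubectl access to the cluster:

    # Print the live Corefile and confirm the injected hosts block is present.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A3 'hosts {'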
	I1129 10:23:44.431702  507966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:23:44.535956  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:23:44.591821  507966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:23:45.161227  507966 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1129 10:23:45.162334  507966 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:23:45.690965  507966 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-194354" context rescaled to 1 replicas
	I1129 10:23:45.753803  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217763289s)
	I1129 10:23:45.753872  507966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161979524s)
	I1129 10:23:45.770491  507966 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:23:45.773465  507966 addons.go:530] duration metric: took 1.751018181s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1129 10:23:47.165805  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:45.542622  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:47.989562  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:49.666635  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:52.166177  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:49.990541  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:51.990931  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:54.666022  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:57.166497  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:23:54.489879  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:56.497926  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:58.989190  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:23:59.665270  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:02.165677  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:00.990865  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:03.488362  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:04.166335  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:06.665750  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:05.988397  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:07.989472  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:08.666205  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:11.166292  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:10.488633  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:12.489841  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:13.665315  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:15.665945  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:18.165215  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:14.989495  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	W1129 10:24:17.494318  510582 pod_ready.go:104] pod "coredns-66bc5c9577-vcgbt" is not "Ready", error: <nil>
	I1129 10:24:18.988496  510582 pod_ready.go:94] pod "coredns-66bc5c9577-vcgbt" is "Ready"
	I1129 10:24:18.988523  510582 pod_ready.go:86] duration metric: took 35.505241752s for pod "coredns-66bc5c9577-vcgbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.991259  510582 pod_ready.go:83] waiting for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.996288  510582 pod_ready.go:94] pod "etcd-no-preload-949993" is "Ready"
	I1129 10:24:18.996320  510582 pod_ready.go:86] duration metric: took 5.032577ms for pod "etcd-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:18.998454  510582 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.003569  510582 pod_ready.go:94] pod "kube-apiserver-no-preload-949993" is "Ready"
	I1129 10:24:19.003603  510582 pod_ready.go:86] duration metric: took 5.120225ms for pod "kube-apiserver-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.006932  510582 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.186848  510582 pod_ready.go:94] pod "kube-controller-manager-no-preload-949993" is "Ready"
	I1129 10:24:19.186884  510582 pod_ready.go:86] duration metric: took 179.913065ms for pod "kube-controller-manager-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.386987  510582 pod_ready.go:83] waiting for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.786916  510582 pod_ready.go:94] pod "kube-proxy-ffl4g" is "Ready"
	I1129 10:24:19.786948  510582 pod_ready.go:86] duration metric: took 399.93109ms for pod "kube-proxy-ffl4g" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:19.987148  510582 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386279  510582 pod_ready.go:94] pod "kube-scheduler-no-preload-949993" is "Ready"
	I1129 10:24:20.386307  510582 pod_ready.go:86] duration metric: took 399.132959ms for pod "kube-scheduler-no-preload-949993" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:20.386321  510582 pod_ready.go:40] duration metric: took 36.911571212s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:20.443695  510582 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:20.446728  510582 out.go:179] * Done! kubectl is now configured to use "no-preload-949993" cluster and "default" namespace by default
	W1129 10:24:20.165619  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:22.665117  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	W1129 10:24:24.665872  507966 node_ready.go:57] node "default-k8s-diff-port-194354" has "Ready":"False" status (will retry)
	I1129 10:24:25.667952  507966 node_ready.go:49] node "default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:25.667980  507966 node_ready.go:38] duration metric: took 40.505616922s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:24:25.667993  507966 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:24:25.668053  507966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:24:25.681244  507966 api_server.go:72] duration metric: took 41.65902631s to wait for apiserver process to appear ...
	I1129 10:24:25.681270  507966 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:24:25.681290  507966 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 10:24:25.691496  507966 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 10:24:25.692557  507966 api_server.go:141] control plane version: v1.34.1
	I1129 10:24:25.692579  507966 api_server.go:131] duration metric: took 11.302624ms to wait for apiserver health ...
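
The healthz probe logged just above is an ordinary HTTPS GET against the apiserver. A minimal Go sketch of the same check follows; the address and port (192.168.85.2:8444) are taken from this run, certificate verification is skipped because this is a throwaway local cluster, and anonymous access to /healthz is assumed to be enabled (the Kubernetes default).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Address/port come from the log above (default-k8s-diff-port serves on 8444).
    	// Skipping TLS verification is acceptable only for a local test cluster.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.85.2:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with body "ok", matching the log above.
    	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
    }
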
	I1129 10:24:25.692588  507966 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:24:25.696448  507966 system_pods.go:59] 8 kube-system pods found
	I1129 10:24:25.696487  507966 system_pods.go:61] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.696494  507966 system_pods.go:61] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.696501  507966 system_pods.go:61] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.696506  507966 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.696510  507966 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.696516  507966 system_pods.go:61] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.696519  507966 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.696526  507966 system_pods.go:61] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.696532  507966 system_pods.go:74] duration metric: took 3.939058ms to wait for pod list to return data ...
	I1129 10:24:25.696552  507966 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:24:25.699312  507966 default_sa.go:45] found service account: "default"
	I1129 10:24:25.699338  507966 default_sa.go:55] duration metric: took 2.780062ms for default service account to be created ...
	I1129 10:24:25.699349  507966 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:24:25.702253  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:25.702291  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:25.702298  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:25.702305  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:25.702310  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:25.702317  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:25.702322  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:25.702330  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:25.702336  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:25.702357  507966 retry.go:31] will retry after 310.669836ms: missing components: kube-dns
	I1129 10:24:26.019886  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.019926  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.019934  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.019940  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.019944  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.019949  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.019954  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.019959  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.019965  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.019986  507966 retry.go:31] will retry after 286.170038ms: missing components: kube-dns
	I1129 10:24:26.310844  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.310881  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:24:26.310888  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.310896  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.310902  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.310907  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.310911  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.310917  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.310927  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:24:26.310957  507966 retry.go:31] will retry after 343.061865ms: missing components: kube-dns
	I1129 10:24:26.658151  507966 system_pods.go:86] 8 kube-system pods found
	I1129 10:24:26.658188  507966 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Running
	I1129 10:24:26.658196  507966 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running
	I1129 10:24:26.658201  507966 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:24:26.658206  507966 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running
	I1129 10:24:26.658210  507966 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running
	I1129 10:24:26.658214  507966 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running
	I1129 10:24:26.658218  507966 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:24:26.658222  507966 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Running
	I1129 10:24:26.658229  507966 system_pods.go:126] duration metric: took 958.87435ms to wait for k8s-apps to be running ...
	I1129 10:24:26.658241  507966 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:24:26.658299  507966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:24:26.671648  507966 system_svc.go:56] duration metric: took 13.39846ms WaitForService to wait for kubelet
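
The kubelet check above relies on systemd's exit code: "systemctl is-active --quiet <unit>" exits 0 only when the unit is active (minikube runs it with sudo over SSH inside the node). A simplified local sketch of the same idea:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "systemctl is-active --quiet kubelet" prints nothing and exits 0 only when
    	// the unit is active; any other state yields a non-zero exit code.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
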
	I1129 10:24:26.671679  507966 kubeadm.go:587] duration metric: took 42.649466351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:24:26.671698  507966 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:24:26.674832  507966 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:24:26.674866  507966 node_conditions.go:123] node cpu capacity is 2
	I1129 10:24:26.674881  507966 node_conditions.go:105] duration metric: took 3.178597ms to run NodePressure ...
	I1129 10:24:26.674895  507966 start.go:242] waiting for startup goroutines ...
	I1129 10:24:26.674903  507966 start.go:247] waiting for cluster config update ...
	I1129 10:24:26.674915  507966 start.go:256] writing updated cluster config ...
	I1129 10:24:26.675229  507966 ssh_runner.go:195] Run: rm -f paused
	I1129 10:24:26.679081  507966 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:26.683003  507966 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.688164  507966 pod_ready.go:94] pod "coredns-66bc5c9577-8rvzs" is "Ready"
	I1129 10:24:26.688202  507966 pod_ready.go:86] duration metric: took 5.168069ms for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.690538  507966 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.696711  507966 pod_ready.go:94] pod "etcd-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.696743  507966 pod_ready.go:86] duration metric: took 6.17568ms for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.699232  507966 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.704231  507966 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:26.704300  507966 pod_ready.go:86] duration metric: took 5.037541ms for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:26.706968  507966 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.083799  507966 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:27.083828  507966 pod_ready.go:86] duration metric: took 376.798157ms for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.283531  507966 pod_ready.go:83] waiting for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.683679  507966 pod_ready.go:94] pod "kube-proxy-68szw" is "Ready"
	I1129 10:24:27.683710  507966 pod_ready.go:86] duration metric: took 400.149827ms for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:27.884031  507966 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283309  507966 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-194354" is "Ready"
	I1129 10:24:28.283387  507966 pod_ready.go:86] duration metric: took 399.326474ms for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:24:28.283410  507966 pod_ready.go:40] duration metric: took 1.60429409s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:24:28.339155  507966 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:24:28.342238  507966 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-194354" cluster and "default" namespace by default
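
Both runs above spend most of their time in pod_ready waits, which poll kube-system pods until the PodReady condition turns True. A rough client-go sketch of that loop follows; it is not minikube's implementation, and the kubeconfig path, pod name, and retry budget are illustrative assumptions taken from this run.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed kubeconfig path; minikube writes the context it reports as "Done!" above.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Poll a single pod by name, mirroring the per-pod waits in the log.
    	const ns, name = "kube-system", "coredns-66bc5c9577-8rvzs"
    	for i := 0; i < 30; i++ {
    		pod, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println(name, "is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println(name, "did not become Ready in time")
    }
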
	
	
	==> CRI-O <==
	Nov 29 10:24:26 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:26.05826431Z" level=info msg="Created container f1169e5eccc12fe7ad8fe4b4f0da101414c0426c215b0b9f04c7efb66262789a: kube-system/coredns-66bc5c9577-8rvzs/coredns" id=383f23ab-d8ca-455e-ba3e-897840a8ef0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:24:26 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:26.059420885Z" level=info msg="Starting container: f1169e5eccc12fe7ad8fe4b4f0da101414c0426c215b0b9f04c7efb66262789a" id=1b55e233-518d-4559-8cec-70be9e663607 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:24:26 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:26.062879462Z" level=info msg="Started container" PID=1740 containerID=f1169e5eccc12fe7ad8fe4b4f0da101414c0426c215b0b9f04c7efb66262789a description=kube-system/coredns-66bc5c9577-8rvzs/coredns id=1b55e233-518d-4559-8cec-70be9e663607 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e8afce99f4482552eeab7e9792015302d74ee304c54dcfd8b3fed768fa8a5c9
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.907483518Z" level=info msg="Running pod sandbox: default/busybox/POD" id=93aef7d9-935e-4da4-90ef-c145b23907b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.907562567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.913953117Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1 UID:6a6a6bef-631a-4303-be59-a408f7f63f1e NetNS:/var/run/netns/0edf2da8-59c3-432e-bfcb-60cd51cd288f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049b6b0}] Aliases:map[]}"
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.914202851Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.925429692Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1 UID:6a6a6bef-631a-4303-be59-a408f7f63f1e NetNS:/var/run/netns/0edf2da8-59c3-432e-bfcb-60cd51cd288f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049b6b0}] Aliases:map[]}"
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.925818381Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.930379125Z" level=info msg="Ran pod sandbox d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1 with infra container: default/busybox/POD" id=93aef7d9-935e-4da4-90ef-c145b23907b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.932790452Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b053efb-6713-4958-87c3-60fcf4399bc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.932927906Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6b053efb-6713-4958-87c3-60fcf4399bc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.932979238Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6b053efb-6713-4958-87c3-60fcf4399bc1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.934741564Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=853cbd3c-cad8-40a1-895f-60062b43e0b5 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:24:28 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:28.937841917Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.001561323Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=853cbd3c-cad8-40a1-895f-60062b43e0b5 name=/runtime.v1.ImageService/PullImage
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.009767426Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86626dbc-8d80-400d-acc0-b19f4dbfa695 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.015483301Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2df55204-76e1-40ab-858a-7ccb3251a204 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.023864455Z" level=info msg="Creating container: default/busybox/busybox" id=653d11ec-4f72-4f02-b178-b431aece6709 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.023977621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.028947444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.029566044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.053656718Z" level=info msg="Created container 67b89ef28c7567a97e9ecd7f02ebaa8521405d92427c4b0ac364d539bca5571f: default/busybox/busybox" id=653d11ec-4f72-4f02-b178-b431aece6709 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.054788145Z" level=info msg="Starting container: 67b89ef28c7567a97e9ecd7f02ebaa8521405d92427c4b0ac364d539bca5571f" id=3808758e-94db-473b-9bbf-82043e59a90d name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:24:31 default-k8s-diff-port-194354 crio[839]: time="2025-11-29T10:24:31.056994078Z" level=info msg="Started container" PID=1792 containerID=67b89ef28c7567a97e9ecd7f02ebaa8521405d92427c4b0ac364d539bca5571f description=default/busybox/busybox id=3808758e-94db-473b-9bbf-82043e59a90d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	67b89ef28c756       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   d6adffb624190       busybox                                                default
	f1169e5eccc12       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   9e8afce99f448       coredns-66bc5c9577-8rvzs                               kube-system
	e7b599628fd15       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   558580c029df4       storage-provisioner                                    kube-system
	cfdcb31d22025       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   fa0c4f072b0f8       kube-proxy-68szw                                       kube-system
	f191703cd4d99       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   00b69769c486f       kindnet-7xnqr                                          kube-system
	46da48399b8a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   fa19c2dbf4d23       kube-scheduler-default-k8s-diff-port-194354            kube-system
	67005ae9393de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   f0194cd0c76cd       kube-controller-manager-default-k8s-diff-port-194354   kube-system
	7a1cc475cd81c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   53bedd4c9ad4a       etcd-default-k8s-diff-port-194354                      kube-system
	f3d2e0b7cf0a3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   c19d6322fc54e       kube-apiserver-default-k8s-diff-port-194354            kube-system
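
The container status table above is what crictl reports for the CRI-O runtime on the node. When triaging a failed run it can be reproduced by shelling into the node (for example with "minikube ssh") and running "sudo crictl ps -a"; the sketch below simply wraps that command and assumes crictl and passwordless sudo are available on the node, as is typically the case in minikube's node image.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// List every container the CRI runtime knows about, matching the table above.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    	}
    	fmt.Print(string(out))
    }
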
	
	
	==> coredns [f1169e5eccc12fe7ad8fe4b4f0da101414c0426c215b0b9f04c7efb66262789a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54513 - 38994 "HINFO IN 6126268274612099660.676145215045565219. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004399535s
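
The CoreDNS log above only shows the startup self-check (the HINFO query), so a useful follow-up when DNS is suspect is to resolve a service name through the kube-dns ClusterIP directly (10.96.0.10 in this run). The sketch below assumes it runs where that IP is routable, i.e. from the node or from a pod.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Resolve through the in-cluster DNS service instead of the host resolver.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, "udp", "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	// Expect the kubernetes service ClusterIP (10.96.0.1 in this run).
    	fmt.Println("resolved:", addrs)
    }
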
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-194354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-194354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-194354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:23:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-194354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:24:30 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:24:30 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:24:30 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:24:30 +0000   Sat, 29 Nov 2025 10:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-194354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                c5e28edc-52c7-4b90-b67b-b957ca9e0425
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-8rvzs                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-194354                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-7xnqr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-194354             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-194354    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-68szw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-194354             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-194354 event: Registered Node default-k8s-diff-port-194354 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-194354 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 09:55] overlayfs: idmapped layers are currently not supported
	[Nov29 09:57] overlayfs: idmapped layers are currently not supported
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7a1cc475cd81c433bf74e91a96074d7afd0bfe2b06fca3e3694d18b59f4edb25] <==
	{"level":"warn","ts":"2025-11-29T10:23:31.471724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.531135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.583612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.641701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.696636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.777730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.806681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.879035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.924103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:31.989450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.034732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.097783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.142829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.162742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.186568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.277885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.314325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.334311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.359985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.398438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.477822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.498326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.539644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.578787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:23:32.765793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47468","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:24:39 up  3:07,  0 user,  load average: 4.02, 3.73, 2.83
	Linux default-k8s-diff-port-194354 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f191703cd4d991dd2c34a3efbe4c897841399bf93379aca15296f8e477597b5e] <==
	I1129 10:23:45.018452       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:23:45.018772       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:23:45.018922       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:23:45.018935       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:23:45.018947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:23:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:23:45.319903       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:23:45.319931       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:23:45.319940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:23:45.320310       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:24:15.319431       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1129 10:24:15.319431       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:24:15.320389       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:24:15.320389       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1129 10:24:16.920111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:24:16.920142       1 metrics.go:72] Registering metrics
	I1129 10:24:16.920215       1 controller.go:711] "Syncing nftables rules"
	I1129 10:24:25.259321       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:24:25.259380       1 main.go:301] handling current node
	I1129 10:24:35.254883       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:24:35.254995       1 main.go:301] handling current node
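
The kindnet reflector errors above are plain TCP timeouts to the kubernetes service VIP (10.96.0.1:443) during startup, and they cleared once the informer caches synced. If such errors persist, a direct dial from the node or a pod separates a routing problem from an apiserver problem; the address below is taken from the log, and the check only makes sense from inside the cluster network.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The service VIP answers only from inside the cluster network (node or pod).
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
    	if err != nil {
    		fmt.Println("cannot reach kubernetes service VIP:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("10.96.0.1:443 is reachable")
    }
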
	
	
	==> kube-apiserver [f3d2e0b7cf0a3cafaeac0db4bd4e601acb078c6abdccd3b265f1c7badd966616] <==
	E1129 10:23:35.429635       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1129 10:23:35.469164       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:23:35.549383       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:23:35.549655       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 10:23:35.562057       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 10:23:35.630801       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:23:35.631971       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:23:35.736339       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:23:35.843686       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 10:23:35.843714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:23:37.325841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:23:37.396019       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:23:37.474327       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 10:23:37.483232       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 10:23:37.484644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:23:37.493285       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:23:38.379402       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:23:38.804256       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:23:38.833078       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 10:23:38.850600       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 10:23:43.765469       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 10:23:44.187501       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:23:44.397018       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:23:44.420665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 10:24:36.774499       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:49532: use of closed network connection
	
	
	==> kube-controller-manager [67005ae9393dee3c82d6bd9b0f65b1497aaea71abe2c7dbad4dad3ed1cb5e9fe] <==
	I1129 10:23:43.434375       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:23:43.434401       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:23:43.434406       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:23:43.434413       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 10:23:43.453830       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:23:43.456315       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:23:43.456424       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 10:23:43.456646       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-194354" podCIDRs=["10.244.0.0/24"]
	I1129 10:23:43.458330       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:23:43.458403       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 10:23:43.459777       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:23:43.459868       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 10:23:43.462980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 10:23:43.470692       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:23:43.473634       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:23:43.482349       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:23:43.485337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:23:43.486505       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:23:43.490902       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:23:43.506786       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:23:43.506813       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:23:43.506820       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:23:43.513614       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:23:43.518588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:24:28.438516       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cfdcb31d22025008b7e266f184580dfb656ec5f7a627d9899881048b491a62da] <==
	I1129 10:23:44.958959       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:23:45.071234       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:23:45.194407       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:23:45.205883       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:23:45.206394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:23:45.742341       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:23:45.742461       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:23:45.755554       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:23:45.755926       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:23:45.756136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:45.757703       1 config.go:200] "Starting service config controller"
	I1129 10:23:45.757775       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:23:45.757818       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:23:45.757865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:23:45.757904       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:23:45.757945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:23:45.758701       1 config.go:309] "Starting node config controller"
	I1129 10:23:45.758768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:23:45.758802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:23:45.858817       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:23:45.858837       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:23:45.858871       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [46da48399b8a13311ce8ac8ebb54e37311f46a510a16b7e5860ef9815a2d405f] <==
	I1129 10:23:36.842235       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:23:36.851604       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:23:36.851716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:36.851743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:23:36.851768       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 10:23:36.871199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:23:36.871282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:23:36.871324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:23:36.871367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:23:36.871416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:23:36.871461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:23:36.898237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:23:36.898325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:23:36.898378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:23:36.898441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:23:36.898503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:23:36.898578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:23:36.898638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:23:36.898727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:23:36.905329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:23:36.905904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:23:36.911549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:23:36.911635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:23:36.911689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1129 10:23:38.154294       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:43.844940    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ff1a8b1a-a98a-4887-b839-3844c313dec0-cni-cfg\") pod \"kindnet-7xnqr\" (UID: \"ff1a8b1a-a98a-4887-b839-3844c313dec0\") " pod="kube-system/kindnet-7xnqr"
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:43.844960    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff1a8b1a-a98a-4887-b839-3844c313dec0-xtables-lock\") pod \"kindnet-7xnqr\" (UID: \"ff1a8b1a-a98a-4887-b839-3844c313dec0\") " pod="kube-system/kindnet-7xnqr"
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:43.844976    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff1a8b1a-a98a-4887-b839-3844c313dec0-lib-modules\") pod \"kindnet-7xnqr\" (UID: \"ff1a8b1a-a98a-4887-b839-3844c313dec0\") " pod="kube-system/kindnet-7xnqr"
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.978694    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.978743    1314 projected.go:196] Error preparing data for projected volume kube-api-access-2d98l for pod kube-system/kube-proxy-68szw: configmap "kube-root-ca.crt" not found
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.978844    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83d34300-8cc0-4093-a15c-44193588a880-kube-api-access-2d98l podName:83d34300-8cc0-4093-a15c-44193588a880 nodeName:}" failed. No retries permitted until 2025-11-29 10:23:44.478798876 +0000 UTC m=+5.766541446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2d98l" (UniqueName: "kubernetes.io/projected/83d34300-8cc0-4093-a15c-44193588a880-kube-api-access-2d98l") pod "kube-proxy-68szw" (UID: "83d34300-8cc0-4093-a15c-44193588a880") : configmap "kube-root-ca.crt" not found
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.982591    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.982627    1314 projected.go:196] Error preparing data for projected volume kube-api-access-8wzmn for pod kube-system/kindnet-7xnqr: configmap "kube-root-ca.crt" not found
	Nov 29 10:23:43 default-k8s-diff-port-194354 kubelet[1314]: E1129 10:23:43.982714    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff1a8b1a-a98a-4887-b839-3844c313dec0-kube-api-access-8wzmn podName:ff1a8b1a-a98a-4887-b839-3844c313dec0 nodeName:}" failed. No retries permitted until 2025-11-29 10:23:44.482675976 +0000 UTC m=+5.770418546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wzmn" (UniqueName: "kubernetes.io/projected/ff1a8b1a-a98a-4887-b839-3844c313dec0-kube-api-access-8wzmn") pod "kindnet-7xnqr" (UID: "ff1a8b1a-a98a-4887-b839-3844c313dec0") : configmap "kube-root-ca.crt" not found
	Nov 29 10:23:44 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:44.561517    1314 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:23:44 default-k8s-diff-port-194354 kubelet[1314]: W1129 10:23:44.718541    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-fa0c4f072b0f8db703c1dac54eb38a5fe64efdaec1f72fe2987c8220e88d2cce WatchSource:0}: Error finding container fa0c4f072b0f8db703c1dac54eb38a5fe64efdaec1f72fe2987c8220e88d2cce: Status 404 returned error can't find the container with id fa0c4f072b0f8db703c1dac54eb38a5fe64efdaec1f72fe2987c8220e88d2cce
	Nov 29 10:23:44 default-k8s-diff-port-194354 kubelet[1314]: W1129 10:23:44.742171    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-00b69769c486fdf30fe590fef836e81da00ec74f9952b8bb65077beff34f72db WatchSource:0}: Error finding container 00b69769c486fdf30fe590fef836e81da00ec74f9952b8bb65077beff34f72db: Status 404 returned error can't find the container with id 00b69769c486fdf30fe590fef836e81da00ec74f9952b8bb65077beff34f72db
	Nov 29 10:23:45 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:45.411858    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7xnqr" podStartSLOduration=2.411837767 podStartE2EDuration="2.411837767s" podCreationTimestamp="2025-11-29 10:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:23:45.302770525 +0000 UTC m=+6.590513103" watchObservedRunningTime="2025-11-29 10:23:45.411837767 +0000 UTC m=+6.699580337"
	Nov 29 10:23:45 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:23:45.721896    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68szw" podStartSLOduration=2.721876721 podStartE2EDuration="2.721876721s" podCreationTimestamp="2025-11-29 10:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:23:45.563549353 +0000 UTC m=+6.851291939" watchObservedRunningTime="2025-11-29 10:23:45.721876721 +0000 UTC m=+7.009619299"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:25.602598    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:25.758622    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/037289ba-251c-44d8-890b-5015790b2440-tmp\") pod \"storage-provisioner\" (UID: \"037289ba-251c-44d8-890b-5015790b2440\") " pod="kube-system/storage-provisioner"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:25.759456    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4r8\" (UniqueName: \"kubernetes.io/projected/037289ba-251c-44d8-890b-5015790b2440-kube-api-access-gg4r8\") pod \"storage-provisioner\" (UID: \"037289ba-251c-44d8-890b-5015790b2440\") " pod="kube-system/storage-provisioner"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:25.759491    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64-config-volume\") pod \"coredns-66bc5c9577-8rvzs\" (UID: \"cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64\") " pod="kube-system/coredns-66bc5c9577-8rvzs"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:25.759544    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7td6h\" (UniqueName: \"kubernetes.io/projected/cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64-kube-api-access-7td6h\") pod \"coredns-66bc5c9577-8rvzs\" (UID: \"cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64\") " pod="kube-system/coredns-66bc5c9577-8rvzs"
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: W1129 10:24:25.969939    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-558580c029df41609a139ab830f9364b6763d5e2bd99168b759dccb7099babf6 WatchSource:0}: Error finding container 558580c029df41609a139ab830f9364b6763d5e2bd99168b759dccb7099babf6: Status 404 returned error can't find the container with id 558580c029df41609a139ab830f9364b6763d5e2bd99168b759dccb7099babf6
	Nov 29 10:24:25 default-k8s-diff-port-194354 kubelet[1314]: W1129 10:24:25.997072    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-9e8afce99f4482552eeab7e9792015302d74ee304c54dcfd8b3fed768fa8a5c9 WatchSource:0}: Error finding container 9e8afce99f4482552eeab7e9792015302d74ee304c54dcfd8b3fed768fa8a5c9: Status 404 returned error can't find the container with id 9e8afce99f4482552eeab7e9792015302d74ee304c54dcfd8b3fed768fa8a5c9
	Nov 29 10:24:26 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:26.406260    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.406238412 podStartE2EDuration="41.406238412s" podCreationTimestamp="2025-11-29 10:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:24:26.392294255 +0000 UTC m=+47.680036841" watchObservedRunningTime="2025-11-29 10:24:26.406238412 +0000 UTC m=+47.693980982"
	Nov 29 10:24:28 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:28.593944    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8rvzs" podStartSLOduration=44.593926791 podStartE2EDuration="44.593926791s" podCreationTimestamp="2025-11-29 10:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:24:26.406885386 +0000 UTC m=+47.694627964" watchObservedRunningTime="2025-11-29 10:24:28.593926791 +0000 UTC m=+49.881669377"
	Nov 29 10:24:28 default-k8s-diff-port-194354 kubelet[1314]: I1129 10:24:28.682795    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mpzx\" (UniqueName: \"kubernetes.io/projected/6a6a6bef-631a-4303-be59-a408f7f63f1e-kube-api-access-9mpzx\") pod \"busybox\" (UID: \"6a6a6bef-631a-4303-be59-a408f7f63f1e\") " pod="default/busybox"
	Nov 29 10:24:28 default-k8s-diff-port-194354 kubelet[1314]: W1129 10:24:28.928457    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1 WatchSource:0}: Error finding container d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1: Status 404 returned error can't find the container with id d6adffb624190e9e4bdac1912666384c7f0dc931c38d71d621b23d9c237f38b1
	
	
	==> storage-provisioner [e7b599628fd1524ea7f406dd1a0f1570c1ac87e6b1420949609c5e34bbaa6b7e] <==
	I1129 10:24:26.027980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:24:26.047891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:24:26.048068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:24:26.053951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:26.064758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:26.064978       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:24:26.065195       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_9947fba2-ca44-44aa-815d-18e6d8f7b916!
	I1129 10:24:26.066376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55fa2c2d-ed1d-4d3e-8f29-7e39e322961c", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-194354_9947fba2-ca44-44aa-815d-18e6d8f7b916 became leader
	W1129 10:24:26.088507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:26.093627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:24:26.165613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_9947fba2-ca44-44aa-815d-18e6d8f7b916!
	W1129 10:24:28.097224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:28.104355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:30.108027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:30.115973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:32.119832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:32.124859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:34.128663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:34.136382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:36.140892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:36.146676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:38.150550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:24:38.162390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.37s)
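The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above appear alongside each renewal of its leader-election lock, which this provisioner still keeps in the kube-system/k8s.io-minikube-hostpath Endpoints object; they are noise here rather than the cause of the failure. A minimal way to look at that lock object, reusing the kubectl context and object name already shown in the logs above, is:

	# inspect the Endpoints object the provisioner uses for leader election
	kubectl --context default-k8s-diff-port-194354 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader-election annotation on that object should name the same holder identity reported in the "became leader" event in the log.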

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (385.616735ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
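The exit status 11 above is minikube's paused-state check: per the error text it runs `sudo runc list -f json` inside the node and fails because /run/runc does not exist there, so the metrics-server addon itself is never evaluated. A rough way to reproduce that probe by hand against the same profile, and to collect the logs the error box asks for, is:

	# re-run the probe the addon check uses, then gather logs as suggested above
	out/minikube-linux-arm64 -p newest-cni-156330 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p newest-cni-156330 logs --file=logs.txt

If the ssh command prints the same "open /run/runc: no such file or directory" error, the node's runc state directory is the thing to chase, not the addon.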
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-156330
helpers_test.go:243: (dbg) docker inspect newest-cni-156330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	        "Created": "2025-11-29T10:24:48.208994014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:24:48.283341334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hostname",
	        "HostsPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hosts",
	        "LogPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275-json.log",
	        "Name": "/newest-cni-156330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-156330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-156330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	                "LowerDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-156330",
	                "Source": "/var/lib/docker/volumes/newest-cni-156330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-156330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-156330",
	                "name.minikube.sigs.k8s.io": "newest-cni-156330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "222b2069e981492f6813a290af6ba97176c967e98134d23d9a44f4c0f6d8acc2",
	            "SandboxKey": "/var/run/docker/netns/222b2069e981",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-156330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:26:0d:af:2e:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "296cf76a04b7032c7fa82b79716bf37121a065fecc07315bcd2905590381d495",
	                    "EndpointID": "5ef018b7883cce5994290c4ab4af767935da0f6b5bededc05d04036b4b4e3ec3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-156330",
	                        "3766eb449434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
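In the docker inspect output above, the empty HostPort entries under HostConfig.PortBindings mean Docker was asked to pick ephemeral host ports on 127.0.0.1; the ports actually assigned appear under NetworkSettings.Ports (for example 8443/tcp mapped to 33459). A quick way to read just those mappings, assuming the node container is still running, is:

	# print the published port mappings for the minikube node container
	docker port newest-cni-156330
	docker port newest-cni-156330 8443

The second command should print 127.0.0.1:33459, matching the 8443/tcp entry in the inspect output.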
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25: (1.429719818s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p embed-certs-708011 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ delete  │ -p cert-expiration-930117                                                                                                                                                                                                                     │ cert-expiration-930117       │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ delete  │ -p disable-driver-mounts-259491                                                                                                                                                                                                               │ disable-driver-mounts-259491 │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:21 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ stop    │ -p default-k8s-diff-port-194354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:24:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:24:52.547143  516700 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:24:52.547361  516700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:24:52.547390  516700 out.go:374] Setting ErrFile to fd 2...
	I1129 10:24:52.547410  516700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:24:52.547718  516700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:24:52.548130  516700 out.go:368] Setting JSON to false
	I1129 10:24:52.551432  516700 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11242,"bootTime":1764400651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:24:52.551543  516700 start.go:143] virtualization:  
	I1129 10:24:52.556688  516700 out.go:179] * [default-k8s-diff-port-194354] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:24:52.560019  516700 notify.go:221] Checking for updates...
	I1129 10:24:52.560548  516700 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:24:52.564585  516700 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:24:52.567560  516700 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:24:52.570574  516700 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:24:48.135081  515467 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-156330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.644754547s)
	I1129 10:24:48.135113  515467 kic.go:203] duration metric: took 4.644882894s to extract preloaded images to volume ...
	W1129 10:24:48.135279  515467 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1129 10:24:48.135381  515467 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 10:24:48.194843  515467 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-156330 --name newest-cni-156330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-156330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-156330 --network newest-cni-156330 --ip 192.168.76.2 --volume newest-cni-156330:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 10:24:48.509687  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Running}}
	I1129 10:24:48.531672  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:24:48.552944  515467 cli_runner.go:164] Run: docker exec newest-cni-156330 stat /var/lib/dpkg/alternatives/iptables
	I1129 10:24:48.606221  515467 oci.go:144] the created container "newest-cni-156330" has a running status.
	I1129 10:24:48.606249  515467 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa...
	I1129 10:24:49.005352  515467 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 10:24:49.026864  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:24:49.050033  515467 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 10:24:49.050058  515467 kic_runner.go:114] Args: [docker exec --privileged newest-cni-156330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 10:24:49.117666  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:24:49.141785  515467 machine.go:94] provisionDockerMachine start ...
	I1129 10:24:49.141893  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:49.164265  515467 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:49.164671  515467 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1129 10:24:49.164689  515467 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:24:49.165304  515467 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 10:24:52.357532  515467 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:24:52.357557  515467 ubuntu.go:182] provisioning hostname "newest-cni-156330"
	I1129 10:24:52.357621  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:52.389947  515467 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:52.390374  515467 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1129 10:24:52.390394  515467 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-156330 && echo "newest-cni-156330" | sudo tee /etc/hostname
	I1129 10:24:52.563855  515467 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:24:52.563979  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:52.576223  516700 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:24:52.579164  516700 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:24:52.582953  516700 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:24:52.583593  516700 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:24:52.626545  516700 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:24:52.626717  516700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:24:52.706401  516700 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:24:52.695270431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:24:52.706511  516700 docker.go:319] overlay module found
	I1129 10:24:52.709631  516700 out.go:179] * Using the docker driver based on existing profile
	I1129 10:24:52.712553  516700 start.go:309] selected driver: docker
	I1129 10:24:52.712575  516700 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:24:52.712679  516700 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:24:52.713398  516700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:24:52.775596  516700 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-29 10:24:52.759363361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:24:52.775968  516700 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:24:52.775995  516700 cni.go:84] Creating CNI manager for ""
	I1129 10:24:52.776060  516700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:24:52.776142  516700 start.go:353] cluster config:
	{Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:24:52.779456  516700 out.go:179] * Starting "default-k8s-diff-port-194354" primary control-plane node in "default-k8s-diff-port-194354" cluster
	I1129 10:24:52.782201  516700 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:24:52.785167  516700 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:24:52.788724  516700 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:24:52.788847  516700 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:24:52.788866  516700 cache.go:65] Caching tarball of preloaded images
	I1129 10:24:52.788951  516700 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:24:52.788964  516700 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:24:52.789090  516700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/config.json ...
	I1129 10:24:52.788790  516700 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:24:52.817609  516700 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:24:52.817637  516700 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:24:52.817656  516700 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:24:52.817693  516700 start.go:360] acquireMachinesLock for default-k8s-diff-port-194354: {Name:mk7fca26c3bc028a411ed52e5a78e2fb6f90caca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:24:52.817774  516700 start.go:364] duration metric: took 57.461µs to acquireMachinesLock for "default-k8s-diff-port-194354"
	I1129 10:24:52.817799  516700 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:24:52.817817  516700 fix.go:54] fixHost starting: 
	I1129 10:24:52.818177  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:24:52.846403  516700 fix.go:112] recreateIfNeeded on default-k8s-diff-port-194354: state=Stopped err=<nil>
	W1129 10:24:52.846440  516700 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 10:24:52.586903  515467 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:52.587225  515467 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1129 10:24:52.587247  515467 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-156330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-156330/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-156330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:24:52.771890  515467 main.go:143] libmachine: SSH cmd err, output: <nil>: 
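The SSH snippet above only rewrites the 127.0.1.1 entry when the node's own hostname is not already present in /etc/hosts, so the container keeps resolving its name after provisioning. A quick way to confirm the result from the host (an illustrative check, not something the test itself runs) is:

	$ docker exec newest-cni-156330 grep '^127.0.1.1' /etc/hosts
	127.0.1.1 newest-cni-156330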
	I1129 10:24:52.771917  515467 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:24:52.771946  515467 ubuntu.go:190] setting up certificates
	I1129 10:24:52.771955  515467 provision.go:84] configureAuth start
	I1129 10:24:52.772014  515467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:24:52.792337  515467 provision.go:143] copyHostCerts
	I1129 10:24:52.792397  515467 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:24:52.792406  515467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:24:52.792476  515467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:24:52.792570  515467 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:24:52.792575  515467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:24:52.792600  515467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:24:52.792696  515467 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:24:52.792701  515467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:24:52.792725  515467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:24:52.792777  515467 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.newest-cni-156330 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-156330]
	I1129 10:24:53.101986  515467 provision.go:177] copyRemoteCerts
	I1129 10:24:53.102094  515467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:24:53.102159  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:53.119345  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:24:53.241256  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 10:24:53.278181  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:24:53.305239  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:24:53.326427  515467 provision.go:87] duration metric: took 554.457063ms to configureAuth
	I1129 10:24:53.326451  515467 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:24:53.326644  515467 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:24:53.326741  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:53.346698  515467 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:53.347538  515467 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1129 10:24:53.347598  515467 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:24:53.732503  515467 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:24:53.732528  515467 machine.go:97] duration metric: took 4.590717694s to provisionDockerMachine
	I1129 10:24:53.732539  515467 client.go:176] duration metric: took 10.915098863s to LocalClient.Create
	I1129 10:24:53.732553  515467 start.go:167] duration metric: took 10.915167171s to libmachine.API.Create "newest-cni-156330"
	I1129 10:24:53.732560  515467 start.go:293] postStartSetup for "newest-cni-156330" (driver="docker")
	I1129 10:24:53.732579  515467 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:24:53.732655  515467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:24:53.732701  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:53.758462  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:24:53.874891  515467 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:24:53.878250  515467 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:24:53.878292  515467 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:24:53.878305  515467 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:24:53.878357  515467 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:24:53.878442  515467 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:24:53.878552  515467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:24:53.887347  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:24:53.912800  515467 start.go:296] duration metric: took 180.225165ms for postStartSetup
	I1129 10:24:53.913231  515467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:24:53.933034  515467 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/config.json ...
	I1129 10:24:53.933291  515467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:24:53.933329  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:53.957969  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:24:54.063307  515467 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:24:54.068208  515467 start.go:128] duration metric: took 11.254459029s to createHost
	I1129 10:24:54.068230  515467 start.go:83] releasing machines lock for "newest-cni-156330", held for 11.254594759s
	I1129 10:24:54.068309  515467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:24:54.084842  515467 ssh_runner.go:195] Run: cat /version.json
	I1129 10:24:54.084873  515467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:24:54.084894  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:54.084926  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:24:54.105821  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:24:54.115970  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:24:54.301564  515467 ssh_runner.go:195] Run: systemctl --version
	I1129 10:24:54.308042  515467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:24:54.349473  515467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:24:54.353977  515467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:24:54.354140  515467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:24:54.383524  515467 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1129 10:24:54.383597  515467 start.go:496] detecting cgroup driver to use...
	I1129 10:24:54.383642  515467 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:24:54.383730  515467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:24:54.401102  515467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:24:54.414388  515467 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:24:54.414454  515467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:24:54.432817  515467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:24:54.452541  515467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:24:54.582451  515467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:24:54.718134  515467 docker.go:234] disabling docker service ...
	I1129 10:24:54.718223  515467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:24:54.739656  515467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:24:54.754647  515467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:24:54.877314  515467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:24:54.996596  515467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:24:55.019462  515467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:24:55.036198  515467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:24:55.036314  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.046434  515467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:24:55.046546  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.056782  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.066891  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.076801  515467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:24:55.085696  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.095265  515467 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.109257  515467 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:55.118450  515467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:24:55.125896  515467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:24:55.133281  515467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:24:55.246685  515467 ssh_runner.go:195] Run: sudo systemctl restart crio
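Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports. After the restart the merged configuration can be spot-checked with crio config, which the provisioner itself runs later in the log; the filter below is an illustrative check and should show pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl:

	$ sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'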
	I1129 10:24:55.425914  515467 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:24:55.425995  515467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:24:55.429741  515467 start.go:564] Will wait 60s for crictl version
	I1129 10:24:55.429817  515467 ssh_runner.go:195] Run: which crictl
	I1129 10:24:55.433301  515467 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:24:55.460069  515467 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:24:55.460170  515467 ssh_runner.go:195] Run: crio --version
	I1129 10:24:55.490044  515467 ssh_runner.go:195] Run: crio --version
	I1129 10:24:55.524573  515467 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:24:55.527434  515467 cli_runner.go:164] Run: docker network inspect newest-cni-156330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:24:55.542881  515467 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:24:55.546606  515467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:24:55.558910  515467 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 10:24:52.849758  516700 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-194354" ...
	I1129 10:24:52.849863  516700 cli_runner.go:164] Run: docker start default-k8s-diff-port-194354
	I1129 10:24:53.204834  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:24:53.225349  516700 kic.go:430] container "default-k8s-diff-port-194354" state is running.
	I1129 10:24:53.225866  516700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-194354
	I1129 10:24:53.258746  516700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/config.json ...
	I1129 10:24:53.258970  516700 machine.go:94] provisionDockerMachine start ...
	I1129 10:24:53.259025  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:53.290209  516700 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:53.290527  516700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1129 10:24:53.290536  516700 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:24:53.291158  516700 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36264->127.0.0.1:33461: read: connection reset by peer
	I1129 10:24:56.450580  516700 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-194354
	
	I1129 10:24:56.450611  516700 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-194354"
	I1129 10:24:56.450687  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:56.473501  516700 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:56.473805  516700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1129 10:24:56.473817  516700 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-194354 && echo "default-k8s-diff-port-194354" | sudo tee /etc/hostname
	I1129 10:24:56.644483  516700 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-194354
	
	I1129 10:24:56.644591  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:56.666968  516700 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:56.667275  516700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1129 10:24:56.667293  516700 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-194354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-194354/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-194354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:24:56.839221  516700 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:24:56.839258  516700 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:24:56.839285  516700 ubuntu.go:190] setting up certificates
	I1129 10:24:56.839295  516700 provision.go:84] configureAuth start
	I1129 10:24:56.839384  516700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-194354
	I1129 10:24:56.863521  516700 provision.go:143] copyHostCerts
	I1129 10:24:56.863651  516700 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:24:56.863672  516700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:24:56.863748  516700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:24:56.863849  516700 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:24:56.863860  516700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:24:56.863887  516700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:24:56.863945  516700 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:24:56.863955  516700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:24:56.863979  516700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:24:56.864031  516700 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-194354 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-194354 localhost minikube]
	I1129 10:24:57.369913  516700 provision.go:177] copyRemoteCerts
	I1129 10:24:57.369986  516700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:24:57.370031  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:57.388539  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:24:57.498834  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:24:57.519822  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 10:24:57.541037  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:24:55.561665  515467 kubeadm.go:884] updating cluster {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:24:55.561804  515467 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:24:55.561876  515467 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:24:55.593727  515467 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:24:55.593753  515467 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:24:55.593820  515467 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:24:55.623362  515467 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:24:55.623388  515467 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:24:55.623396  515467 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:24:55.623527  515467 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-156330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:24:55.623636  515467 ssh_runner.go:195] Run: crio config
	I1129 10:24:55.677253  515467 cni.go:84] Creating CNI manager for ""
	I1129 10:24:55.677277  515467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:24:55.677298  515467 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 10:24:55.677321  515467 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-156330 NodeName:newest-cni-156330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:24:55.677447  515467 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-156330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:24:55.677523  515467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:24:55.685386  515467 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:24:55.685490  515467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:24:55.694681  515467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:24:55.707282  515467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:24:55.720682  515467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
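The three "scp memory -->" lines land the kubelet drop-in, the kubelet unit, and the kubeadm config on the node at the paths shown (367, 352 and 2212 bytes respectively). If the bring-up later looks off, they can be read back directly (illustrative commands, not part of the run):

	$ docker exec newest-cni-156330 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	$ docker exec newest-cni-156330 cat /var/tmp/minikube/kubeadm.yaml.new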
	I1129 10:24:55.736031  515467 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:24:55.739685  515467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:24:55.749522  515467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:24:55.863245  515467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:24:55.880356  515467 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330 for IP: 192.168.76.2
	I1129 10:24:55.880392  515467 certs.go:195] generating shared ca certs ...
	I1129 10:24:55.880408  515467 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:55.880621  515467 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:24:55.880677  515467 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:24:55.880696  515467 certs.go:257] generating profile certs ...
	I1129 10:24:55.880756  515467 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.key
	I1129 10:24:55.880779  515467 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.crt with IP's: []
	I1129 10:24:56.040421  515467 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.crt ...
	I1129 10:24:56.040456  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.crt: {Name:mkc9cb4dac173ed71a3ec139ab47ae9f52226c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:56.040687  515467 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.key ...
	I1129 10:24:56.040702  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.key: {Name:mk061fc282103cc8d35c87d11b505bba940ed2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:56.040803  515467 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16
	I1129 10:24:56.040821  515467 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt.fb07df16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 10:24:56.239503  515467 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt.fb07df16 ...
	I1129 10:24:56.239538  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt.fb07df16: {Name:mka4a5dcbb98a74cc7159fe92152cec898d3b48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:56.239726  515467 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16 ...
	I1129 10:24:56.239740  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16: {Name:mk5d16dcc0cd0106962c1c4c5c8d878933b50e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:56.239827  515467 certs.go:382] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt.fb07df16 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt
	I1129 10:24:56.239908  515467 certs.go:386] copying /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16 -> /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key
	I1129 10:24:56.239971  515467 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key
	I1129 10:24:56.239992  515467 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt with IP's: []
	I1129 10:24:56.412698  515467 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt ...
	I1129 10:24:56.412731  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt: {Name:mke3e6b599c47a86a8d98ce0848787a3089269b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:24:56.412947  515467 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key ...
	I1129 10:24:56.412962  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key: {Name:mk448a8d490ddceafd2eed8330a648806705f8e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
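The apiserver certificate generated above embeds the SANs listed on the crypto.go:68 line (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2). Should a TLS verification failure surface later, those SANs can be read back from the written apiserver.crt with openssl (path as in the log; the command is illustrative):

	$ openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'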
	I1129 10:24:56.413163  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:24:56.413213  515467 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:24:56.413228  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:24:56.413259  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:24:56.413290  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:24:56.413319  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:24:56.413369  515467 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:24:56.413966  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:24:56.435298  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:24:56.460142  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:24:56.479711  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:24:56.497710  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:24:56.516850  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:24:56.542882  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:24:56.561442  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:24:56.580786  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:24:56.600380  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:24:56.618112  515467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:24:56.637185  515467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:24:56.656600  515467 ssh_runner.go:195] Run: openssl version
	I1129 10:24:56.671930  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:24:56.680197  515467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:24:56.684365  515467 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:24:56.684428  515467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:24:56.755297  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:24:56.764482  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:24:56.778649  515467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:24:56.782279  515467 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:24:56.782418  515467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:24:56.825820  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:24:56.834472  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:24:56.845108  515467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:24:56.850214  515467 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:24:56.850277  515467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:24:56.893168  515467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
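The openssl x509 -hash -noout calls above compute the subject-name hash that names the /etc/ssl/certs/&lt;hash&gt;.0 links (b5213941.0, 51391683.0, 3ec20f2e.0), which is what lets OpenSSL-based clients locate the minikube CA and the test certificates by hashed lookup. The same pattern, reproduced by hand, would be roughly:

	$ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"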
	I1129 10:24:56.902247  515467 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:24:56.906853  515467 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 10:24:56.906906  515467 kubeadm.go:401] StartCluster: {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:24:56.907001  515467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:24:56.907057  515467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:24:56.938262  515467 cri.go:89] found id: ""
	I1129 10:24:56.938328  515467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:24:56.948554  515467 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 10:24:56.957068  515467 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 10:24:56.957127  515467 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 10:24:56.967915  515467 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 10:24:56.967983  515467 kubeadm.go:158] found existing configuration files:
	
	I1129 10:24:56.968063  515467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 10:24:56.976875  515467 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 10:24:56.976939  515467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 10:24:56.984647  515467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 10:24:56.993166  515467 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 10:24:56.993233  515467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 10:24:57.001083  515467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 10:24:57.011629  515467 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 10:24:57.011742  515467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 10:24:57.020408  515467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 10:24:57.029792  515467 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 10:24:57.029909  515467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
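Because this is a first start, all four kubeconfig greps exit with status 2 (the files do not exist), so each file is removed unconditionally. The per-file checks above boil down to the following pattern (a condensed sketch, not the literal commands the provisioner issues):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done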
	I1129 10:24:57.038014  515467 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 10:24:57.113798  515467 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1129 10:24:57.114169  515467 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1129 10:24:57.214174  515467 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 10:24:57.564101  516700 provision.go:87] duration metric: took 724.767643ms to configureAuth
	I1129 10:24:57.564180  516700 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:24:57.564475  516700 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:24:57.564638  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:57.595171  516700 main.go:143] libmachine: Using SSH client type: native
	I1129 10:24:57.595492  516700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1129 10:24:57.595506  516700 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:24:57.991138  516700 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:24:57.991157  516700 machine.go:97] duration metric: took 4.732178262s to provisionDockerMachine
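(Note: the provisioning step just above writes CRIO_MINIKUBE_OPTIONS — an --insecure-registry flag covering the service CIDR — into /etc/sysconfig/crio.minikube and restarts CRI-O over SSH. A small hedged sketch of verifying the result manually on the node; whether the flag appears on the daemon command line depends on the crio unit expanding that environment file:

    # Confirm the sysconfig drop-in written by the provisioner
    cat /etc/sysconfig/crio.minikube
    # Confirm the CRI-O service came back up after the restart
    systemctl is-active crio
    # If the unit expands CRIO_MINIKUBE_OPTIONS, the flag shows up on the daemon command line
    ps -C crio -o args=
)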
	I1129 10:24:57.991168  516700 start.go:293] postStartSetup for "default-k8s-diff-port-194354" (driver="docker")
	I1129 10:24:57.991180  516700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:24:57.991240  516700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:24:57.991298  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:58.018318  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:24:58.126894  516700 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:24:58.130829  516700 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:24:58.130869  516700 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:24:58.130885  516700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:24:58.130956  516700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:24:58.131050  516700 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:24:58.131169  516700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:24:58.139451  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:24:58.163274  516700 start.go:296] duration metric: took 172.079961ms for postStartSetup
	I1129 10:24:58.163368  516700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:24:58.163442  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:58.193320  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:24:58.303303  516700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:24:58.308416  516700 fix.go:56] duration metric: took 5.490592491s for fixHost
	I1129 10:24:58.308440  516700 start.go:83] releasing machines lock for "default-k8s-diff-port-194354", held for 5.490652545s
	I1129 10:24:58.308517  516700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-194354
	I1129 10:24:58.327042  516700 ssh_runner.go:195] Run: cat /version.json
	I1129 10:24:58.327092  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:58.327120  516700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:24:58.327191  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:24:58.354958  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:24:58.371830  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:24:58.560512  516700 ssh_runner.go:195] Run: systemctl --version
	I1129 10:24:58.567670  516700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:24:58.614834  516700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:24:58.620011  516700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:24:58.620162  516700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:24:58.629077  516700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:24:58.629162  516700 start.go:496] detecting cgroup driver to use...
	I1129 10:24:58.629228  516700 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:24:58.629312  516700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:24:58.645684  516700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:24:58.660236  516700 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:24:58.660361  516700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:24:58.677425  516700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:24:58.691547  516700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:24:58.836666  516700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:24:59.010039  516700 docker.go:234] disabling docker service ...
	I1129 10:24:59.010209  516700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:24:59.032001  516700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:24:59.056886  516700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:24:59.212076  516700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:24:59.357083  516700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:24:59.372824  516700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:24:59.388336  516700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:24:59.388450  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.397824  516700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:24:59.397942  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.408006  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.417718  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.427251  516700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:24:59.436686  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.446586  516700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.455957  516700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:24:59.465358  516700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:24:59.474103  516700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:24:59.481972  516700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:24:59.625493  516700 ssh_runner.go:195] Run: sudo systemctl restart crio
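(Note: the sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pause image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to cgroupfs, conmon_cgroup reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls — before crio is restarted. A minimal sketch of checking the resulting drop-in afterwards, assuming the same file path:

    # The drop-in should now contain the values the sed commands wrote
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # "crio config" prints the fully merged runtime configuration
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'
)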
	I1129 10:24:59.831565  516700 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:24:59.831759  516700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:24:59.836096  516700 start.go:564] Will wait 60s for crictl version
	I1129 10:24:59.836235  516700 ssh_runner.go:195] Run: which crictl
	I1129 10:24:59.840087  516700 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:24:59.866579  516700 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:24:59.866730  516700 ssh_runner.go:195] Run: crio --version
	I1129 10:24:59.902716  516700 ssh_runner.go:195] Run: crio --version
	I1129 10:24:59.950069  516700 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:24:59.953068  516700 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-194354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:24:59.975781  516700 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 10:24:59.980105  516700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:24:59.989566  516700 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:24:59.989687  516700 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:24:59.989746  516700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:00.036656  516700 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:00.036685  516700 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:25:00.036753  516700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:00.084649  516700 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:00.084679  516700 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:25:00.084688  516700 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1129 10:25:00.084809  516700 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-194354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 10:25:00.084926  516700 ssh_runner.go:195] Run: crio config
	I1129 10:25:00.176814  516700 cni.go:84] Creating CNI manager for ""
	I1129 10:25:00.176841  516700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:00.176861  516700 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 10:25:00.176998  516700 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-194354 NodeName:default-k8s-diff-port-194354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:25:00.177182  516700 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-194354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
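(Note: the rendered kubeadm configuration above — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document YAML — is what is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of how such a file could be sanity-checked outside the test run, assuming kubeadm v1.34 is on the PATH; the harness itself does not do this:

    # Parse and validate the multi-document config without touching the node
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Or walk the full init flow without persisting anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
)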
	
	I1129 10:25:00.177280  516700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:25:00.191304  516700 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:25:00.191389  516700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:25:00.205871  516700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1129 10:25:00.232393  516700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
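(Note: the two scp-from-memory steps above install the kubelet systemd unit and its 10-kubeadm.conf drop-in — the [Unit]/[Service]/ExecStart block logged earlier. A minimal sketch of inspecting what systemd will actually run once they are in place, illustrative only:

    # Show the unit together with every drop-in, including 10-kubeadm.conf
    systemctl cat kubelet
    # Confirm the merged ExecStart carries the expected flags (node IP, kubeconfig, CRI-O socket)
    systemctl show kubelet --property=ExecStart --no-pager
)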
	I1129 10:25:00.253871  516700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1129 10:25:00.271385  516700 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:25:00.276953  516700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
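(Note: the hosts-file update above is idempotent: it strips any existing control-plane.minikube.internal line, appends the current node IP, and copies the result back with sudo, so repeated starts do not accumulate duplicate entries. The same pattern written out as a small hedged sketch for an arbitrary name/IP pair; NAME and IP are placeholders:

    NAME=control-plane.minikube.internal   # placeholder values for illustration
    IP=192.168.85.2
    # keep every line except the old entry, append the fresh one, then install the new file
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
)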
	I1129 10:25:00.289619  516700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:00.496952  516700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:00.518726  516700 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354 for IP: 192.168.85.2
	I1129 10:25:00.518825  516700 certs.go:195] generating shared ca certs ...
	I1129 10:25:00.518865  516700 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:00.519098  516700 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:25:00.519191  516700 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:25:00.519228  516700 certs.go:257] generating profile certs ...
	I1129 10:25:00.519382  516700 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.key
	I1129 10:25:00.519506  516700 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/apiserver.key.4216ac30
	I1129 10:25:00.519595  516700 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/proxy-client.key
	I1129 10:25:00.519786  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:25:00.519849  516700 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:25:00.519899  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:25:00.519972  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:25:00.520042  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:25:00.520110  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:25:00.520206  516700 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:25:00.521273  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:25:00.544864  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:25:00.577150  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:25:00.631976  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:25:00.671788  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 10:25:00.703102  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:25:00.725619  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:25:00.770148  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 10:25:00.807051  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:25:00.849877  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:25:00.891915  516700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:25:00.926590  516700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:25:00.957545  516700 ssh_runner.go:195] Run: openssl version
	I1129 10:25:00.983740  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:25:00.998600  516700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:01.003390  516700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:01.003467  516700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:01.047653  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:25:01.056476  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:25:01.065234  516700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:25:01.070918  516700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:25:01.070998  516700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:25:01.125641  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:25:01.134469  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:25:01.144212  516700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:25:01.149741  516700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:25:01.149814  516700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:25:01.195318  516700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
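(Note: the three test/ln/hash blocks above follow the standard OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under <subject-hash>.0 so tools that scan the hashed directory can resolve it. A hedged sketch of the same pattern for a single certificate; the path is a placeholder:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # placeholder path
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # e.g. b5213941 for the minikube CA above; the .0 suffix disambiguates hash collisions
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
)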
	I1129 10:25:01.204019  516700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:25:01.208921  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:25:01.252956  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:25:01.300978  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:25:01.343989  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:25:01.414584  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:25:01.479427  516700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
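(Note: the six openssl invocations above use -checkend 86400, which exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, which is how minikube decides whether the existing control-plane certificates are safe to reuse. A minimal stand-alone sketch of the same check; the path is a placeholder:

    CERT=/var/lib/minikube/certs/apiserver.crt   # placeholder
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or is already expired) - would be regenerated"
    fi
)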
	I1129 10:25:01.578987  516700 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-194354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-194354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:01.579078  516700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:25:01.579157  516700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:25:01.665931  516700 cri.go:89] found id: ""
	I1129 10:25:01.666004  516700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:25:01.688108  516700 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:25:01.688136  516700 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:25:01.688192  516700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:25:01.706329  516700 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:25:01.706737  516700 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-194354" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:01.706844  516700 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-194354" cluster setting kubeconfig missing "default-k8s-diff-port-194354" context setting]
	I1129 10:25:01.707171  516700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:01.708431  516700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:25:01.732542  516700 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 10:25:01.732577  516700 kubeadm.go:602] duration metric: took 44.434809ms to restartPrimaryControlPlane
	I1129 10:25:01.732587  516700 kubeadm.go:403] duration metric: took 153.610056ms to StartCluster
	I1129 10:25:01.732608  516700 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:01.732674  516700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:01.733314  516700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:01.733529  516700 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:01.733835  516700 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:01.733885  516700 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:25:01.733953  516700 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-194354"
	I1129 10:25:01.733973  516700 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-194354"
	W1129 10:25:01.733979  516700 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:25:01.734003  516700 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:25:01.734050  516700 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-194354"
	I1129 10:25:01.734089  516700 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-194354"
	W1129 10:25:01.734099  516700 addons.go:248] addon dashboard should already be in state true
	I1129 10:25:01.734123  516700 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:25:01.734497  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:25:01.734567  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:25:01.737658  516700 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-194354"
	I1129 10:25:01.739402  516700 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-194354"
	I1129 10:25:01.742177  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:25:01.752991  516700 out.go:179] * Verifying Kubernetes components...
	I1129 10:25:01.756463  516700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:01.790216  516700 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:25:01.796225  516700 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:25:01.796399  516700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:01.796412  516700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:25:01.796478  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:25:01.802278  516700 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 10:25:01.805725  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:25:01.805749  516700 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:25:01.805831  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:25:01.812641  516700 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-194354"
	W1129 10:25:01.812665  516700 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:25:01.812689  516700 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:25:01.813107  516700 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:25:01.856325  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:25:01.889931  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:25:01.902325  516700 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:01.902345  516700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:25:01.902409  516700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:25:01.936446  516700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:25:02.245383  516700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:02.327396  516700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:02.358578  516700 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:25:02.403186  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:25:02.403213  516700 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:25:02.439630  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:25:02.439652  516700 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:25:02.471351  516700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:02.555201  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:25:02.555227  516700 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:25:02.639717  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:25:02.639736  516700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:25:02.742442  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:25:02.742466  516700 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:25:02.860934  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:25:02.860956  516700 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:25:02.891583  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:25:02.891660  516700 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:25:02.931508  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:25:02.931582  516700 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:25:03.002389  516700 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:25:03.002473  516700 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:25:03.037403  516700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
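(Note: the single kubectl apply above installs all ten dashboard manifests — namespace, RBAC objects, ConfigMap, Secret, Deployment and Service — using the kubeconfig and kubectl binary staged on the node. A hedged sketch of checking the result from the host afterwards, assuming the kubernetes-dashboard namespace and k8s-app label used by the upstream dashboard manifests:

    # Deployment and service created by the addon manifests
    kubectl --context default-k8s-diff-port-194354 -n kubernetes-dashboard get deploy,svc,pods
    # Wait until the dashboard pod reports Ready
    kubectl --context default-k8s-diff-port-194354 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s
)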
	I1129 10:25:11.195899  516700 node_ready.go:49] node "default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:11.195927  516700 node_ready.go:38] duration metric: took 8.837266433s for node "default-k8s-diff-port-194354" to be "Ready" ...
	I1129 10:25:11.195940  516700 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:25:11.196003  516700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:25:14.214594  516700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.887114532s)
	I1129 10:25:14.214803  516700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.743431183s)
	I1129 10:25:14.215132  516700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.177647719s)
	I1129 10:25:14.215420  516700 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.01940361s)
	I1129 10:25:14.215474  516700 api_server.go:72] duration metric: took 12.481903123s to wait for apiserver process to appear ...
	I1129 10:25:14.215494  516700 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:25:14.215540  516700 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1129 10:25:14.218460  516700 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-194354 addons enable metrics-server
	
	I1129 10:25:14.268070  516700 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 10:25:14.269280  516700 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1129 10:25:14.271148  516700 addons.go:530] duration metric: took 12.537261124s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 10:25:14.272689  516700 api_server.go:141] control plane version: v1.34.1
	I1129 10:25:14.272744  516700 api_server.go:131] duration metric: took 57.230941ms to wait for apiserver health ...
	I1129 10:25:14.272769  516700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:25:14.285497  516700 system_pods.go:59] 8 kube-system pods found
	I1129 10:25:14.285587  516700 system_pods.go:61] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:25:14.285613  516700 system_pods.go:61] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:25:14.285651  516700 system_pods.go:61] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:25:14.285681  516700 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:25:14.285704  516700 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:25:14.285742  516700 system_pods.go:61] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:25:14.285770  516700 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:25:14.285795  516700 system_pods.go:61] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:25:14.285831  516700 system_pods.go:74] duration metric: took 13.041222ms to wait for pod list to return data ...
	I1129 10:25:14.285860  516700 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:25:14.290068  516700 default_sa.go:45] found service account: "default"
	I1129 10:25:14.290158  516700 default_sa.go:55] duration metric: took 4.277876ms for default service account to be created ...
	I1129 10:25:14.290197  516700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 10:25:14.297038  516700 system_pods.go:86] 8 kube-system pods found
	I1129 10:25:14.297120  516700 system_pods.go:89] "coredns-66bc5c9577-8rvzs" [cc43ce79-a4c7-4ed6-bdd9-ca9dc6925d64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 10:25:14.297145  516700 system_pods.go:89] "etcd-default-k8s-diff-port-194354" [3ea2b867-7a4c-4601-b8fe-0a7740bf6de2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:25:14.297168  516700 system_pods.go:89] "kindnet-7xnqr" [ff1a8b1a-a98a-4887-b839-3844c313dec0] Running
	I1129 10:25:14.297210  516700 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-194354" [e40ab5f1-1f8e-4774-bdc9-1a68ff780c06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:25:14.297231  516700 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-194354" [e37c544e-6277-4e91-9ee1-528153edfd63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:25:14.297268  516700 system_pods.go:89] "kube-proxy-68szw" [83d34300-8cc0-4093-a15c-44193588a880] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:25:14.297289  516700 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-194354" [0ca0016c-a1aa-4b91-aa79-9ea5d6b81db8] Running
	I1129 10:25:14.297326  516700 system_pods.go:89] "storage-provisioner" [037289ba-251c-44d8-890b-5015790b2440] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 10:25:14.297359  516700 system_pods.go:126] duration metric: took 7.124885ms to wait for k8s-apps to be running ...
	I1129 10:25:14.297387  516700 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 10:25:14.297475  516700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:25:14.366860  516700 system_svc.go:56] duration metric: took 69.463866ms WaitForService to wait for kubelet
	I1129 10:25:14.366948  516700 kubeadm.go:587] duration metric: took 12.6333855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:25:14.366984  516700 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:25:14.382206  516700 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:25:14.382291  516700 node_conditions.go:123] node cpu capacity is 2
	I1129 10:25:14.382319  516700 node_conditions.go:105] duration metric: took 15.29953ms to run NodePressure ...
	I1129 10:25:14.382346  516700 start.go:242] waiting for startup goroutines ...
	I1129 10:25:14.382382  516700 start.go:247] waiting for cluster config update ...
	I1129 10:25:14.382412  516700 start.go:256] writing updated cluster config ...
	I1129 10:25:14.382772  516700 ssh_runner.go:195] Run: rm -f paused
	I1129 10:25:14.392301  516700 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:25:14.407821  516700 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 10:25:16.414837  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:20.375591  515467 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 10:25:20.375648  515467 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 10:25:20.375735  515467 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 10:25:20.375790  515467 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1129 10:25:20.375824  515467 kubeadm.go:319] OS: Linux
	I1129 10:25:20.375869  515467 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 10:25:20.375916  515467 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1129 10:25:20.375972  515467 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 10:25:20.376020  515467 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 10:25:20.376078  515467 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 10:25:20.376126  515467 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 10:25:20.376170  515467 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 10:25:20.376218  515467 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 10:25:20.376263  515467 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1129 10:25:20.376334  515467 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 10:25:20.376438  515467 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 10:25:20.376529  515467 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 10:25:20.376590  515467 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 10:25:20.381651  515467 out.go:252]   - Generating certificates and keys ...
	I1129 10:25:20.381742  515467 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 10:25:20.381816  515467 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 10:25:20.381884  515467 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 10:25:20.381946  515467 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 10:25:20.382012  515467 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 10:25:20.382063  515467 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 10:25:20.382166  515467 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 10:25:20.382296  515467 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-156330] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:25:20.382348  515467 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 10:25:20.382467  515467 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-156330] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 10:25:20.382531  515467 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 10:25:20.382594  515467 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 10:25:20.382638  515467 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 10:25:20.382694  515467 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 10:25:20.382749  515467 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 10:25:20.382807  515467 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 10:25:20.382868  515467 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 10:25:20.382933  515467 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 10:25:20.382987  515467 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 10:25:20.383068  515467 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 10:25:20.383133  515467 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 10:25:20.386473  515467 out.go:252]   - Booting up control plane ...
	I1129 10:25:20.386658  515467 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 10:25:20.386795  515467 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 10:25:20.386922  515467 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 10:25:20.387075  515467 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 10:25:20.387233  515467 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 10:25:20.387354  515467 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 10:25:20.387445  515467 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 10:25:20.387487  515467 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 10:25:20.387626  515467 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 10:25:20.387738  515467 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 10:25:20.387800  515467 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001092444s
	I1129 10:25:20.387899  515467 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 10:25:20.387986  515467 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 10:25:20.388080  515467 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 10:25:20.388164  515467 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 10:25:20.388244  515467 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.940666092s
	I1129 10:25:20.388314  515467 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.09023813s
	I1129 10:25:20.388387  515467 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.001358381s
	I1129 10:25:20.388499  515467 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 10:25:20.388638  515467 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 10:25:20.388709  515467 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 10:25:20.388907  515467 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-156330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 10:25:20.388967  515467 kubeadm.go:319] [bootstrap-token] Using token: 6ijjc7.ufvwtzz1gz0oiw4r
	I1129 10:25:20.392618  515467 out.go:252]   - Configuring RBAC rules ...
	I1129 10:25:20.392831  515467 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 10:25:20.392963  515467 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 10:25:20.393195  515467 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 10:25:20.393394  515467 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 10:25:20.393555  515467 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 10:25:20.393696  515467 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 10:25:20.393856  515467 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 10:25:20.393913  515467 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 10:25:20.393966  515467 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 10:25:20.393980  515467 kubeadm.go:319] 
	I1129 10:25:20.394049  515467 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 10:25:20.394056  515467 kubeadm.go:319] 
	I1129 10:25:20.394178  515467 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 10:25:20.394190  515467 kubeadm.go:319] 
	I1129 10:25:20.394216  515467 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 10:25:20.394305  515467 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 10:25:20.394364  515467 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 10:25:20.394373  515467 kubeadm.go:319] 
	I1129 10:25:20.394427  515467 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 10:25:20.394437  515467 kubeadm.go:319] 
	I1129 10:25:20.394485  515467 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 10:25:20.394490  515467 kubeadm.go:319] 
	I1129 10:25:20.394568  515467 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 10:25:20.394650  515467 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 10:25:20.394727  515467 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 10:25:20.394736  515467 kubeadm.go:319] 
	I1129 10:25:20.394821  515467 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 10:25:20.394902  515467 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 10:25:20.394911  515467 kubeadm.go:319] 
	I1129 10:25:20.395033  515467 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6ijjc7.ufvwtzz1gz0oiw4r \
	I1129 10:25:20.395206  515467 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 \
	I1129 10:25:20.395244  515467 kubeadm.go:319] 	--control-plane 
	I1129 10:25:20.395254  515467 kubeadm.go:319] 
	I1129 10:25:20.395340  515467 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 10:25:20.395351  515467 kubeadm.go:319] 
	I1129 10:25:20.395436  515467 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6ijjc7.ufvwtzz1gz0oiw4r \
	I1129 10:25:20.395562  515467 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:26ef02740eb010bdf63492cda75493837363c490611f954b521b77b94c1f3ca3 
	I1129 10:25:20.395573  515467 cni.go:84] Creating CNI manager for ""
	I1129 10:25:20.395580  515467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:20.403230  515467 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1129 10:25:18.415760  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:20.425747  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:20.406472  515467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 10:25:20.415026  515467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 10:25:20.415093  515467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 10:25:20.456310  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 10:25:20.954310  515467 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 10:25:20.954459  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:20.954530  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-156330 minikube.k8s.io/updated_at=2025_11_29T10_25_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-156330 minikube.k8s.io/primary=true
	I1129 10:25:21.338753  515467 ops.go:34] apiserver oom_adj: -16
	I1129 10:25:21.338867  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:21.839127  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:22.339292  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:22.839292  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:23.339210  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:23.839279  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:24.339286  515467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 10:25:24.579322  515467 kubeadm.go:1114] duration metric: took 3.624906612s to wait for elevateKubeSystemPrivileges
	I1129 10:25:24.579356  515467 kubeadm.go:403] duration metric: took 27.672457346s to StartCluster
	I1129 10:25:24.579375  515467 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:24.579451  515467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:24.580451  515467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:24.580680  515467 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:24.580776  515467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 10:25:24.581037  515467 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:24.581083  515467 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:25:24.581193  515467 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-156330"
	I1129 10:25:24.581213  515467 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-156330"
	I1129 10:25:24.581233  515467 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:24.581236  515467 addons.go:70] Setting default-storageclass=true in profile "newest-cni-156330"
	I1129 10:25:24.581286  515467 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-156330"
	I1129 10:25:24.581661  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:24.582198  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:24.585050  515467 out.go:179] * Verifying Kubernetes components...
	I1129 10:25:24.590703  515467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:24.626599  515467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:25:24.630249  515467 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:24.630270  515467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:25:24.630300  515467 addons.go:239] Setting addon default-storageclass=true in "newest-cni-156330"
	I1129 10:25:24.630330  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:24.630333  515467 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:24.630756  515467 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:24.666620  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:24.678296  515467 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:24.678325  515467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:25:24.678400  515467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:24.707707  515467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:25.278346  515467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:25.300174  515467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:25.527601  515467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 10:25:25.527810  515467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:26.593657  515467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293399652s)
	I1129 10:25:26.593787  515467 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.065936185s)
	I1129 10:25:26.594105  515467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315683659s)
	I1129 10:25:26.594198  515467 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.066522325s)
	I1129 10:25:26.594241  515467 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 10:25:26.594733  515467 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:25:26.594799  515467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:25:26.615741  515467 api_server.go:72] duration metric: took 2.03502398s to wait for apiserver process to appear ...
	I1129 10:25:26.615816  515467 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:25:26.615851  515467 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:26.651883  515467 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:25:26.653254  515467 api_server.go:141] control plane version: v1.34.1
	I1129 10:25:26.653280  515467 api_server.go:131] duration metric: took 37.443744ms to wait for apiserver health ...
	I1129 10:25:26.653289  515467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:25:26.688044  515467 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 10:25:26.689295  515467 system_pods.go:59] 9 kube-system pods found
	I1129 10:25:26.689374  515467 system_pods.go:61] "coredns-66bc5c9577-75zjs" [02785e7d-f319-4f0f-96ee-6be35c299baa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:26.689397  515467 system_pods.go:61] "coredns-66bc5c9577-qmqkb" [17fb87a0-6829-48b1-8fec-653431fdffdc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:26.689436  515467 system_pods.go:61] "etcd-newest-cni-156330" [746d2e85-25b2-4bfc-a73f-0915d8ad139f] Running
	I1129 10:25:26.689473  515467 system_pods.go:61] "kindnet-pbbpw" [91c0b846-d32c-4a34-b86e-0a70463acf97] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 10:25:26.689495  515467 system_pods.go:61] "kube-apiserver-newest-cni-156330" [f22cc283-0f55-4963-b408-f0e6369fe13d] Running
	I1129 10:25:26.689530  515467 system_pods.go:61] "kube-controller-manager-newest-cni-156330" [48ed13b6-74ed-453d-b487-840731d8497f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:25:26.689557  515467 system_pods.go:61] "kube-proxy-7k5nl" [5066bedf-aec0-4cb1-b9da-7073ad77a358] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 10:25:26.689578  515467 system_pods.go:61] "kube-scheduler-newest-cni-156330" [0fece855-29cc-4724-a24a-eba2d26500e0] Running
	I1129 10:25:26.689613  515467 system_pods.go:61] "storage-provisioner" [5a6c22d4-57aa-45cd-9972-b81a1c2998a4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:26.689814  515467 system_pods.go:74] duration metric: took 36.517276ms to wait for pod list to return data ...
	I1129 10:25:26.689840  515467 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:25:26.693092  515467 addons.go:530] duration metric: took 2.111983058s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 10:25:26.719844  515467 default_sa.go:45] found service account: "default"
	I1129 10:25:26.719868  515467 default_sa.go:55] duration metric: took 29.992242ms for default service account to be created ...
	I1129 10:25:26.719881  515467 kubeadm.go:587] duration metric: took 2.139170082s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 10:25:26.719898  515467 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:25:26.760872  515467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:25:26.760906  515467 node_conditions.go:123] node cpu capacity is 2
	I1129 10:25:26.760919  515467 node_conditions.go:105] duration metric: took 41.017284ms to run NodePressure ...
	I1129 10:25:26.760932  515467 start.go:242] waiting for startup goroutines ...
	I1129 10:25:27.098395  515467 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-156330" context rescaled to 1 replicas
	I1129 10:25:27.098422  515467 start.go:247] waiting for cluster config update ...
	I1129 10:25:27.098435  515467 start.go:256] writing updated cluster config ...
	I1129 10:25:27.098714  515467 ssh_runner.go:195] Run: rm -f paused
	I1129 10:25:27.182527  515467 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:25:27.188596  515467 out.go:179] * Done! kubectl is now configured to use "newest-cni-156330" cluster and "default" namespace by default
	W1129 10:25:22.913086  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:24.918580  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:27.420105  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
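The api_server.go entries above show minikube waiting for the kube-apiserver process to appear and then polling https://192.168.76.2:8443/healthz until it returns 200 ("ok"). The Go sketch below illustrates that polling pattern only; it is not minikube's actual code. It assumes anonymous access to /healthz (typically granted by the default system:public-info-viewer binding) and skips TLS verification for brevity, whereas minikube authenticates with the client certificate from the cluster's kubeconfig.

	// healthzpoll.go - illustrative sketch of the apiserver readiness loop
	// reported by api_server.go above; NOT minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: skip certificate checks. A real client
				// would load the CA and client certs from the kubeconfig.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.76.2:8443/healthz" // address taken from the log above

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}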
	
	
	==> CRI-O <==
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.639200165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.664317403Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c0176184-891e-4c1f-983e-c91742c5fb20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.677075323Z" level=info msg="Ran pod sandbox 0fde47d1dce48f124716ed4c30258b0842b69a1e237cdc6a8de316ad0b68f4b0 with infra container: kube-system/kindnet-pbbpw/POD" id=c0176184-891e-4c1f-983e-c91742c5fb20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.690569679Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2249936c-6f60-47ee-9a5a-2cf06b017097 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.696420661Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=20a20017-186c-4382-9916-788bc28d41ba name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.711272325Z" level=info msg="Creating container: kube-system/kindnet-pbbpw/kindnet-cni" id=23bb7fb9-5a50-4b39-b1fb-d6da850f4199 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.711593987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.722188689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.722889555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.741795249Z" level=info msg="Created container fb04d8311de8889a9017ea2d3f72422d8246af53741250680b5b6de43e731ab0: kube-system/kindnet-pbbpw/kindnet-cni" id=23bb7fb9-5a50-4b39-b1fb-d6da850f4199 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.745792326Z" level=info msg="Starting container: fb04d8311de8889a9017ea2d3f72422d8246af53741250680b5b6de43e731ab0" id=f2355a2e-1c0b-4ad9-841f-28886eae822f name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.748294764Z" level=info msg="Started container" PID=1469 containerID=fb04d8311de8889a9017ea2d3f72422d8246af53741250680b5b6de43e731ab0 description=kube-system/kindnet-pbbpw/kindnet-cni id=f2355a2e-1c0b-4ad9-841f-28886eae822f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fde47d1dce48f124716ed4c30258b0842b69a1e237cdc6a8de316ad0b68f4b0
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.871342695Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-7k5nl/POD" id=6bc5a1e3-5024-4f80-9918-fb79d338281a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.871619432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.876242092Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6bc5a1e3-5024-4f80-9918-fb79d338281a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.886007345Z" level=info msg="Ran pod sandbox 93e4c244e15e74837f1d4a9abb8ec040cfb8a524a0b7a1376b99deadd6d5e647 with infra container: kube-system/kube-proxy-7k5nl/POD" id=6bc5a1e3-5024-4f80-9918-fb79d338281a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.887438098Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5e059303-8d95-4155-8328-057c51df7888 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.891375171Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=64070213-1ced-4bbb-a531-d012e63aea8e name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.900437096Z" level=info msg="Creating container: kube-system/kube-proxy-7k5nl/kube-proxy" id=ebd4b0f1-8760-4c09-a631-149fbbf03111 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.900680595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.906499601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.907204324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.938788256Z" level=info msg="Created container e1c7d6fd6739201270ab3f2d89d4b84e4d0359937c9a91455be1533df99bcd16: kube-system/kube-proxy-7k5nl/kube-proxy" id=ebd4b0f1-8760-4c09-a631-149fbbf03111 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.942773502Z" level=info msg="Starting container: e1c7d6fd6739201270ab3f2d89d4b84e4d0359937c9a91455be1533df99bcd16" id=abde34c8-d485-4b70-837a-a6359570ea27 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:26 newest-cni-156330 crio[836]: time="2025-11-29T10:25:26.952664934Z" level=info msg="Started container" PID=1480 containerID=e1c7d6fd6739201270ab3f2d89d4b84e4d0359937c9a91455be1533df99bcd16 description=kube-system/kube-proxy-7k5nl/kube-proxy id=abde34c8-d485-4b70-837a-a6359570ea27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93e4c244e15e74837f1d4a9abb8ec040cfb8a524a0b7a1376b99deadd6d5e647
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e1c7d6fd67392       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   93e4c244e15e7       kube-proxy-7k5nl                            kube-system
	fb04d8311de88       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   0fde47d1dce48       kindnet-pbbpw                               kube-system
	7b4af88fc4195       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago      Running             etcd                      0                   84c2d5f1ec9d3       etcd-newest-cni-156330                      kube-system
	28a37b0114998       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago      Running             kube-scheduler            0                   f5869df9df681       kube-scheduler-newest-cni-156330            kube-system
	a652f55c56c8a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago      Running             kube-controller-manager   0                   b8c1a5be7ac1b       kube-controller-manager-newest-cni-156330   kube-system
	cb543ad326336       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago      Running             kube-apiserver            0                   7b50dbcb289f4       kube-apiserver-newest-cni-156330            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-156330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-156330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-156330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_25_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:25:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-156330
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:25:20 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:25:20 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:25:20 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 10:25:20 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-156330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                18f776bf-837e-4512-96d1-eca8626890e6
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-156330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-pbbpw                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5s
	  kube-system                 kube-apiserver-newest-cni-156330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-156330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-7k5nl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-newest-cni-156330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 22s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x8 over 22s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-156330 event: Registered Node newest-cni-156330 in Controller
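The describe output above shows the node still carrying the node.kubernetes.io/not-ready:NoSchedule taint because no CNI configuration exists yet in /etc/cni/net.d/; once the kindnet pod writes one, the Ready condition flips to True. The following hypothetical sketch simply waits for that transition. It assumes kubectl is on the PATH and the current context points at newest-cni-156330; it is not part of the test suite.

	// nodewait.go - illustrative sketch: poll the node's Ready condition
	// (seen as "Ready False ... KubeletNotReady" above) until it becomes True.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const node = "newest-cni-156330" // node name from the log above
		jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("node did not become Ready before the deadline")
	}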
	
	
	==> dmesg <==
	[ +26.270780] overlayfs: idmapped layers are currently not supported
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	[Nov29 10:25] overlayfs: idmapped layers are currently not supported
	[  +6.600462] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b4af88fc41955a5a41bffc180bf30ed3af4e020e1a5910569f31a108ef9b72d] <==
	{"level":"warn","ts":"2025-11-29T10:25:15.672450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.698552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.704093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.721515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.742897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.760534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.774485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.801638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.816927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.835179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.851693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.875672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.892452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.908626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.934892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.944178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.960259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:15.978030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:16.000709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:16.022684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:16.040999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:16.056361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:16.140921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T10:25:25.302048Z","caller":"traceutil/trace.go:172","msg":"trace[1346824596] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"103.994822ms","start":"2025-11-29T10:25:25.198025Z","end":"2025-11-29T10:25:25.302020Z","steps":["trace[1346824596] 'process raft request'  (duration: 86.855516ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T10:25:25.308866Z","caller":"traceutil/trace.go:172","msg":"trace[1713896979] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"110.721416ms","start":"2025-11-29T10:25:25.198113Z","end":"2025-11-29T10:25:25.308835Z","steps":["trace[1713896979] 'process raft request'  (duration: 91.899588ms)","trace[1713896979] 'compare'  (duration: 14.181568ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:25:29 up  3:07,  0 user,  load average: 6.04, 4.30, 3.07
	Linux newest-cni-156330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fb04d8311de8889a9017ea2d3f72422d8246af53741250680b5b6de43e731ab0] <==
	I1129 10:25:26.929766       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:25:26.930033       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:25:26.930190       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:25:26.930209       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:25:26.930220       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:25:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:25:27.126861       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:25:27.126891       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:25:27.126913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:25:27.127066       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [cb543ad3263362be50a89dc77e692c5c46c1560a32c6469a7928330f139bc956] <==
	I1129 10:25:17.134361       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 10:25:17.134699       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:25:17.211187       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:17.211432       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 10:25:17.220124       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:17.225062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:25:17.232861       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 10:25:17.320214       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:25:17.772866       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 10:25:17.778143       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 10:25:17.778244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:25:18.772165       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:25:18.836736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:25:18.943781       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 10:25:18.953546       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 10:25:18.955000       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:25:18.971325       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:25:19.024213       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:25:19.783989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:25:19.813266       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 10:25:19.831061       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 10:25:24.788513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:25:24.865758       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 10:25:25.413247       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:25.457825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a652f55c56c8abc35da43e1e170f949bd9353e386da0d5b1f9451971f573599a] <==
	I1129 10:25:24.154063       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 10:25:24.154384       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:25:24.154802       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 10:25:24.154870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:25:24.155484       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:25:24.155583       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-156330"
	I1129 10:25:24.155654       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 10:25:24.155136       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 10:25:24.156663       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:25:24.156740       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:25:24.155342       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 10:25:24.164041       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:25:24.173615       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 10:25:24.177290       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:25:24.180606       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:25:24.187061       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 10:25:24.202441       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:25:24.204732       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:25:24.206929       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:25:24.207040       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 10:25:24.207220       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 10:25:24.207148       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:25:24.207183       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:25:24.211021       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:25:24.212529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [e1c7d6fd6739201270ab3f2d89d4b84e4d0359937c9a91455be1533df99bcd16] <==
	I1129 10:25:27.010833       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:25:27.113431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:25:27.225976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:25:27.226008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:25:27.229027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:25:27.397201       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:25:27.397322       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:25:27.401570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:25:27.401935       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:25:27.402197       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:27.411956       1 config.go:200] "Starting service config controller"
	I1129 10:25:27.411977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:25:27.412002       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:25:27.412007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:25:27.412033       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:25:27.412037       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:25:27.412661       1 config.go:309] "Starting node config controller"
	I1129 10:25:27.412668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:25:27.412674       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:25:27.515568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:25:27.552203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:25:27.552295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
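The repeated "Waiting for caches to sync" / "Caches are synced" pairs in the kube-proxy (and kube-controller-manager) logs above come from client-go's shared informer machinery. Below is a minimal sketch of that pattern, assuming client-go is available and a kubeconfig exists at the default ~/.kube/config location; it illustrates the library idiom only and is not kube-proxy's code.

	// informersync.go - illustrative sketch of starting a shared informer
	// and waiting for its cache to sync, the pattern behind the log lines above.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default location for the cluster above.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Mirrors the "Waiting for caches to sync" / "Caches are synced"
		// pair printed by kube-proxy's node informer above.
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			fmt.Println("caches never synced")
			return
		}
		fmt.Println("caches are synced")
	}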
	
	
	==> kube-scheduler [28a37b01149984d189a2a7b9ddbbe63f236fffa7997890d603a243a0aeee4320] <==
	E1129 10:25:17.045606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:25:17.045712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:25:17.045805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:25:17.046052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:25:17.050658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:25:17.050761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:25:17.050843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:25:17.050909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 10:25:17.050982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:25:17.051055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:25:17.051206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:25:17.051276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:25:17.051342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:25:17.051460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:25:17.051522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:25:17.893098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:25:17.966286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:25:18.014182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:25:18.057976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:25:18.071833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:25:18.101504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:25:18.119244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:25:18.224258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:25:18.224812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1129 10:25:21.005867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: I1129 10:25:21.279124    1294 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: E1129 10:25:21.323757    1294 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-156330\" already exists" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: I1129 10:25:21.361715    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-156330" podStartSLOduration=1.3616953760000001 podStartE2EDuration="1.361695376s" podCreationTimestamp="2025-11-29 10:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:21.314050665 +0000 UTC m=+1.584665055" watchObservedRunningTime="2025-11-29 10:25:21.361695376 +0000 UTC m=+1.632309765"
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: I1129 10:25:21.393871    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-156330" podStartSLOduration=1.393851057 podStartE2EDuration="1.393851057s" podCreationTimestamp="2025-11-29 10:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:21.361865568 +0000 UTC m=+1.632479966" watchObservedRunningTime="2025-11-29 10:25:21.393851057 +0000 UTC m=+1.664465447"
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: I1129 10:25:21.471255    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-156330" podStartSLOduration=1.471235354 podStartE2EDuration="1.471235354s" podCreationTimestamp="2025-11-29 10:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:21.394326115 +0000 UTC m=+1.664940505" watchObservedRunningTime="2025-11-29 10:25:21.471235354 +0000 UTC m=+1.741849752"
	Nov 29 10:25:21 newest-cni-156330 kubelet[1294]: I1129 10:25:21.509786    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-156330" podStartSLOduration=1.50976698 podStartE2EDuration="1.50976698s" podCreationTimestamp="2025-11-29 10:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:21.472200231 +0000 UTC m=+1.742814637" watchObservedRunningTime="2025-11-29 10:25:21.50976698 +0000 UTC m=+1.780381386"
	Nov 29 10:25:24 newest-cni-156330 kubelet[1294]: I1129 10:25:24.082382    1294 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 10:25:24 newest-cni-156330 kubelet[1294]: I1129 10:25:24.083711    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: E1129 10:25:25.148199    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-7k5nl\" is forbidden: User \"system:node:newest-cni-156330\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-156330' and this object" podUID="5066bedf-aec0-4cb1-b9da-7073ad77a358" pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: E1129 10:25:25.148449    1294 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-156330\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-156330' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: E1129 10:25:25.148523    1294 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-156330\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-156330' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171086    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-xtables-lock\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171157    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-lib-modules\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171182    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwzmj\" (UniqueName: \"kubernetes.io/projected/5066bedf-aec0-4cb1-b9da-7073ad77a358-kube-api-access-lwzmj\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171204    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5066bedf-aec0-4cb1-b9da-7073ad77a358-kube-proxy\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171220    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-cni-cfg\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171237    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-lib-modules\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171252    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4h5b\" (UniqueName: \"kubernetes.io/projected/91c0b846-d32c-4a34-b86e-0a70463acf97-kube-api-access-w4h5b\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:25 newest-cni-156330 kubelet[1294]: I1129 10:25:25.171271    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-xtables-lock\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:26 newest-cni-156330 kubelet[1294]: E1129 10:25:26.276788    1294 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:26 newest-cni-156330 kubelet[1294]: E1129 10:25:26.276908    1294 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5066bedf-aec0-4cb1-b9da-7073ad77a358-kube-proxy podName:5066bedf-aec0-4cb1-b9da-7073ad77a358 nodeName:}" failed. No retries permitted until 2025-11-29 10:25:26.776881152 +0000 UTC m=+7.047495541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5066bedf-aec0-4cb1-b9da-7073ad77a358-kube-proxy") pod "kube-proxy-7k5nl" (UID: "5066bedf-aec0-4cb1-b9da-7073ad77a358") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:26 newest-cni-156330 kubelet[1294]: I1129 10:25:26.448066    1294 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:25:26 newest-cni-156330 kubelet[1294]: W1129 10:25:26.883862    1294 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/crio-93e4c244e15e74837f1d4a9abb8ec040cfb8a524a0b7a1376b99deadd6d5e647 WatchSource:0}: Error finding container 93e4c244e15e74837f1d4a9abb8ec040cfb8a524a0b7a1376b99deadd6d5e647: Status 404 returned error can't find the container with id 93e4c244e15e74837f1d4a9abb8ec040cfb8a524a0b7a1376b99deadd6d5e647
	Nov 29 10:25:27 newest-cni-156330 kubelet[1294]: I1129 10:25:27.403331    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7k5nl" podStartSLOduration=3.403306993 podStartE2EDuration="3.403306993s" podCreationTimestamp="2025-11-29 10:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:27.325465599 +0000 UTC m=+7.596079997" watchObservedRunningTime="2025-11-29 10:25:27.403306993 +0000 UTC m=+7.673921383"
	Nov 29 10:25:27 newest-cni-156330 kubelet[1294]: I1129 10:25:27.554931    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pbbpw" podStartSLOduration=3.554910242 podStartE2EDuration="3.554910242s" podCreationTimestamp="2025-11-29 10:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 10:25:27.410775997 +0000 UTC m=+7.681390403" watchObservedRunningTime="2025-11-29 10:25:27.554910242 +0000 UTC m=+7.825524632"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-156330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qmqkb storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner: exit status 1 (119.306216ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qmqkb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-156330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-156330 --alsologtostderr -v=1: exit status 80 (1.647709746s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-156330 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:25:49.187251  522649 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:49.187858  522649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:49.187938  522649 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:49.187962  522649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:49.188286  522649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:49.188579  522649 out.go:368] Setting JSON to false
	I1129 10:25:49.188620  522649 mustload.go:66] Loading cluster: newest-cni-156330
	I1129 10:25:49.189894  522649 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:49.190523  522649 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:49.223822  522649 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:49.224170  522649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:49.330535  522649 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 10:25:49.310590424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:49.331186  522649 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-156330 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:25:49.334774  522649 out.go:179] * Pausing node newest-cni-156330 ... 
	I1129 10:25:49.338771  522649 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:49.339220  522649 ssh_runner.go:195] Run: systemctl --version
	I1129 10:25:49.339275  522649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:49.359974  522649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:49.482574  522649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:25:49.497277  522649 pause.go:52] kubelet running: true
	I1129 10:25:49.497349  522649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:25:49.717357  522649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:25:49.717444  522649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:25:49.795322  522649 cri.go:89] found id: "20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199"
	I1129 10:25:49.795346  522649 cri.go:89] found id: "327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e"
	I1129 10:25:49.795352  522649 cri.go:89] found id: "3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95"
	I1129 10:25:49.795360  522649 cri.go:89] found id: "7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270"
	I1129 10:25:49.795364  522649 cri.go:89] found id: "7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61"
	I1129 10:25:49.795368  522649 cri.go:89] found id: "627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec"
	I1129 10:25:49.795371  522649 cri.go:89] found id: ""
	I1129 10:25:49.795424  522649 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:25:49.806632  522649 retry.go:31] will retry after 138.546022ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:49Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:25:49.946141  522649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:25:49.960269  522649 pause.go:52] kubelet running: false
	I1129 10:25:49.960395  522649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:25:50.162841  522649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:25:50.162989  522649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:25:50.237582  522649 cri.go:89] found id: "20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199"
	I1129 10:25:50.237656  522649 cri.go:89] found id: "327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e"
	I1129 10:25:50.237685  522649 cri.go:89] found id: "3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95"
	I1129 10:25:50.237704  522649 cri.go:89] found id: "7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270"
	I1129 10:25:50.237724  522649 cri.go:89] found id: "7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61"
	I1129 10:25:50.237761  522649 cri.go:89] found id: "627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec"
	I1129 10:25:50.237778  522649 cri.go:89] found id: ""
	I1129 10:25:50.237860  522649 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:25:50.250290  522649 retry.go:31] will retry after 257.694456ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:50Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:25:50.508836  522649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:25:50.522681  522649 pause.go:52] kubelet running: false
	I1129 10:25:50.522797  522649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:25:50.659749  522649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:25:50.659830  522649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:25:50.731822  522649 cri.go:89] found id: "20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199"
	I1129 10:25:50.731846  522649 cri.go:89] found id: "327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e"
	I1129 10:25:50.731850  522649 cri.go:89] found id: "3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95"
	I1129 10:25:50.731854  522649 cri.go:89] found id: "7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270"
	I1129 10:25:50.731857  522649 cri.go:89] found id: "7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61"
	I1129 10:25:50.731860  522649 cri.go:89] found id: "627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec"
	I1129 10:25:50.731864  522649 cri.go:89] found id: ""
	I1129 10:25:50.731929  522649 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:25:50.747012  522649 out.go:203] 
	W1129 10:25:50.750033  522649 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:25:50.750057  522649 out.go:285] * 
	* 
	W1129 10:25:50.757336  522649 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:25:50.760416  522649 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-156330 --alsologtostderr -v=1 failed: exit status 80
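Note on the failure mode: the pause flow in the stderr above first disables the kubelet and then lists CRI containers, but each attempt ends at `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" even though crictl had just returned six running container IDs; that is what surfaces as GUEST_PAUSE / exit status 80. A minimal manual re-check of those two steps (a sketch only, assuming the newest-cni-156330 profile still exists and using the same minikube binary as the test run):

	out/minikube-linux-arm64 -p newest-cni-156330 ssh -- sudo crictl ps --quiet    # CRI-O still lists the container IDs seen in the log
	out/minikube-linux-arm64 -p newest-cni-156330 ssh -- sudo runc list -f json    # should reproduce the "open /run/runc: no such file or directory" error above
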
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-156330
helpers_test.go:243: (dbg) docker inspect newest-cni-156330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	        "Created": "2025-11-29T10:24:48.208994014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 520910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:25:33.760702241Z",
	            "FinishedAt": "2025-11-29T10:25:32.942412191Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hostname",
	        "HostsPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hosts",
	        "LogPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275-json.log",
	        "Name": "/newest-cni-156330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-156330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-156330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	                "LowerDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-156330",
	                "Source": "/var/lib/docker/volumes/newest-cni-156330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-156330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-156330",
	                "name.minikube.sigs.k8s.io": "newest-cni-156330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d01ae1b086574f4cc9925758ff574b20eac96488bbe18dd3b136a457fc1a2cc6",
	            "SandboxKey": "/var/run/docker/netns/d01ae1b08657",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-156330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:51:64:3c:9a:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "296cf76a04b7032c7fa82b79716bf37121a065fecc07315bcd2905590381d495",
	                    "EndpointID": "d50a0e6da3cf3e7af88a434e1798181f8b3905fc9ebcf4cbd16678ab5b650c0b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-156330",
	                        "3766eb449434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330: exit status 2 (384.9885ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
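The host container is still "Running" here while the status command exits 2, most likely because the failed pause attempt above had already run `sudo systemctl disable --now kubelet` (the retries log "kubelet running: false"), so the kubelet/apiserver components are no longer reported healthy. A per-component view of the same state (sketch, same binary and profile as in the log):

	out/minikube-linux-arm64 status -p newest-cni-156330 --output=json
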
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25: (1.079823954s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ stop    │ -p default-k8s-diff-port-194354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ stop    │ -p newest-cni-156330 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-156330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ image   │ newest-cni-156330 image list --format=json                                                                                                                                                                                                    │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p newest-cni-156330 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:25:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:25:33.483205  520783 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:33.483583  520783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:33.483631  520783 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:33.483653  520783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:33.484035  520783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:33.484531  520783 out.go:368] Setting JSON to false
	I1129 10:25:33.485554  520783 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11283,"bootTime":1764400651,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:25:33.485686  520783 start.go:143] virtualization:  
	I1129 10:25:33.488918  520783 out.go:179] * [newest-cni-156330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:25:33.492830  520783 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:25:33.492941  520783 notify.go:221] Checking for updates...
	I1129 10:25:33.500888  520783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:25:33.504957  520783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:33.507822  520783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:25:33.510751  520783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:25:33.513658  520783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:25:33.516942  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:33.517574  520783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:25:33.551932  520783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:25:33.552052  520783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:33.616915  520783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:33.606626781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:33.617021  520783 docker.go:319] overlay module found
	I1129 10:25:33.620239  520783 out.go:179] * Using the docker driver based on existing profile
	I1129 10:25:33.623172  520783 start.go:309] selected driver: docker
	I1129 10:25:33.623195  520783 start.go:927] validating driver "docker" against &{Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:33.623311  520783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:25:33.624057  520783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:33.678713  520783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:33.669822783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:33.679079  520783 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 10:25:33.679114  520783 cni.go:84] Creating CNI manager for ""
	I1129 10:25:33.679178  520783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:33.679218  520783 start.go:353] cluster config:
	{Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:33.682378  520783 out.go:179] * Starting "newest-cni-156330" primary control-plane node in "newest-cni-156330" cluster
	I1129 10:25:33.685163  520783 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:25:33.688064  520783 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:25:33.690959  520783 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:33.691038  520783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:25:33.691050  520783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:25:33.691069  520783 cache.go:65] Caching tarball of preloaded images
	I1129 10:25:33.691186  520783 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:25:33.691196  520783 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:25:33.691349  520783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/config.json ...
	I1129 10:25:33.710490  520783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:25:33.710514  520783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:25:33.710528  520783 cache.go:243] Successfully downloaded all kic artifacts
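	Both the preloaded image tarball and the kic base image are served from the local cache here rather than re-downloaded. A small sketch for inspecting what is cached (paths taken from the log above; the base image may be listed under its tag rather than the digest shown):
	ls -lh /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/
	docker images gcr.io/k8s-minikube/kicbase-builds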
	I1129 10:25:33.710559  520783 start.go:360] acquireMachinesLock for newest-cni-156330: {Name:mk0b8f68121a1d050dcf1381cb60e275f8c46ccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:25:33.710618  520783 start.go:364] duration metric: took 39.139µs to acquireMachinesLock for "newest-cni-156330"
	I1129 10:25:33.710643  520783 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:25:33.710649  520783 fix.go:54] fixHost starting: 
	I1129 10:25:33.710925  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:33.727465  520783 fix.go:112] recreateIfNeeded on newest-cni-156330: state=Stopped err=<nil>
	W1129 10:25:33.727510  520783 fix.go:138] unexpected machine state, will restart: <nil>
	W1129 10:25:34.414381  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:36.914501  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:33.730676  520783 out.go:252] * Restarting existing docker container for "newest-cni-156330" ...
	I1129 10:25:33.730758  520783 cli_runner.go:164] Run: docker start newest-cni-156330
	I1129 10:25:33.982936  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:34.010005  520783 kic.go:430] container "newest-cni-156330" state is running.
	I1129 10:25:34.010506  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:34.029606  520783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/config.json ...
	I1129 10:25:34.029840  520783 machine.go:94] provisionDockerMachine start ...
	I1129 10:25:34.029904  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:34.049369  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:34.049854  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:34.049909  520783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:25:34.050741  520783 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47338->127.0.0.1:33466: read: connection reset by peer
	I1129 10:25:37.202620  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:25:37.202646  520783 ubuntu.go:182] provisioning hostname "newest-cni-156330"
	I1129 10:25:37.202711  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.220848  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.221166  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.221182  520783 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-156330 && echo "newest-cni-156330" | sudo tee /etc/hostname
	I1129 10:25:37.383805  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:25:37.383884  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.402428  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.402750  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.402771  520783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-156330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-156330/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-156330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:25:37.558676  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:25:37.558704  520783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:25:37.558735  520783 ubuntu.go:190] setting up certificates
	I1129 10:25:37.558745  520783 provision.go:84] configureAuth start
	I1129 10:25:37.558839  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:37.580706  520783 provision.go:143] copyHostCerts
	I1129 10:25:37.580784  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:25:37.580802  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:25:37.580883  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:25:37.580993  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:25:37.581005  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:25:37.581035  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:25:37.581111  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:25:37.581120  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:25:37.581147  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:25:37.581207  520783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.newest-cni-156330 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-156330]
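	The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-156330). A hedged one-liner to read those SANs back out of the generated file:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'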
	I1129 10:25:37.790579  520783 provision.go:177] copyRemoteCerts
	I1129 10:25:37.790654  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:25:37.790695  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.811513  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:37.919783  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:25:37.938941  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:25:37.958315  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:25:37.977177  520783 provision.go:87] duration metric: took 418.401144ms to configureAuth
	I1129 10:25:37.977211  520783 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:25:37.977417  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:37.977529  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.996004  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.996325  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.996339  520783 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:25:38.347426  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:25:38.347454  520783 machine.go:97] duration metric: took 4.31760058s to provisionDockerMachine
	I1129 10:25:38.347467  520783 start.go:293] postStartSetup for "newest-cni-156330" (driver="docker")
	I1129 10:25:38.347478  520783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:25:38.347548  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:25:38.347594  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.368597  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.473826  520783 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:25:38.477470  520783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:25:38.477504  520783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:25:38.477517  520783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:25:38.477574  520783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:25:38.477668  520783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:25:38.477785  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:25:38.486187  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:25:38.505500  520783 start.go:296] duration metric: took 158.018326ms for postStartSetup
	I1129 10:25:38.505584  520783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:25:38.505630  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.522522  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.623173  520783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:25:38.627993  520783 fix.go:56] duration metric: took 4.91733716s for fixHost
	I1129 10:25:38.628021  520783 start.go:83] releasing machines lock for "newest-cni-156330", held for 4.917387459s
	I1129 10:25:38.628093  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:38.645921  520783 ssh_runner.go:195] Run: cat /version.json
	I1129 10:25:38.645972  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.646395  520783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:25:38.646465  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.667514  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.675692  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.769811  520783 ssh_runner.go:195] Run: systemctl --version
	I1129 10:25:38.858821  520783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:25:38.900675  520783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:25:38.905662  520783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:25:38.905766  520783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:25:38.915722  520783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:25:38.915775  520783 start.go:496] detecting cgroup driver to use...
	I1129 10:25:38.915830  520783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:25:38.915915  520783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:25:38.931285  520783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:25:38.944454  520783 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:25:38.944525  520783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:25:38.959999  520783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:25:38.973085  520783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:25:39.117045  520783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:25:39.235180  520783 docker.go:234] disabling docker service ...
	I1129 10:25:39.235251  520783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:25:39.250302  520783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:25:39.263825  520783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:25:39.378495  520783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:25:39.513169  520783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
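	With docker.service and cri-docker masked, CRI-O is left as the only runtime on the node. A minimal check, assuming `minikube ssh` against this profile (masked units report "masked"/"inactive"):
	minikube -p newest-cni-156330 ssh -- "systemctl is-active docker.service crio.service"
	minikube -p newest-cni-156330 ssh -- "systemctl is-enabled cri-docker.socket"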
	I1129 10:25:39.527265  520783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:25:39.544003  520783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:25:39.544116  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.553026  520783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:25:39.553116  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.562470  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.571696  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.581395  520783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:25:39.590422  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.599869  520783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.608384  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.616823  520783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:25:39.624346  520783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:25:39.631767  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:39.740182  520783 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 10:25:39.906224  520783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:25:39.906306  520783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:25:39.914550  520783 start.go:564] Will wait 60s for crictl version
	I1129 10:25:39.914800  520783 ssh_runner.go:195] Run: which crictl
	I1129 10:25:39.918896  520783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:25:39.944208  520783 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:25:39.944295  520783 ssh_runner.go:195] Run: crio --version
	I1129 10:25:39.973633  520783 ssh_runner.go:195] Run: crio --version
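	The sed edits above pin the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A sketch to read those settings back and re-confirm the runtime version (a verification aid, not part of the test run):
	minikube -p newest-cni-156330 ssh -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	minikube -p newest-cni-156330 ssh -- "sudo crictl version"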
	I1129 10:25:40.018265  520783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:25:40.021305  520783 cli_runner.go:164] Run: docker network inspect newest-cni-156330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:25:40.044128  520783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:25:40.048584  520783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:25:40.062906  520783 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 10:25:40.065847  520783 kubeadm.go:884] updating cluster {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:25:40.066003  520783 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:40.066091  520783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:40.107358  520783 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:40.107388  520783 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:25:40.107454  520783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:40.134833  520783 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:40.134859  520783 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:25:40.134869  520783 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:25:40.134974  520783 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-156330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
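	The ExecStart above is what ends up in the kubelet drop-in (10-kubeadm.conf) written a few lines below. A hedged way to read the effective unit back from the node:
	minikube -p newest-cni-156330 ssh -- "systemctl cat kubelet --no-pager"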
	I1129 10:25:40.135059  520783 ssh_runner.go:195] Run: crio config
	I1129 10:25:40.214880  520783 cni.go:84] Creating CNI manager for ""
	I1129 10:25:40.214950  520783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:40.214974  520783 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 10:25:40.215000  520783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-156330 NodeName:newest-cni-156330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:25:40.215131  520783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-156330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:25:40.215209  520783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:25:40.224557  520783 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:25:40.224711  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:25:40.233467  520783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:25:40.247553  520783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:25:40.261447  520783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
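	The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new next to the v1.34.1 binaries. A hedged sanity check, assuming the staged kubeadm supports `config validate` (recent releases do):
	minikube -p newest-cni-156330 ssh -- \
	  "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"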
	I1129 10:25:40.273894  520783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:25:40.277743  520783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:25:40.287578  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:40.418326  520783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:40.441072  520783 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330 for IP: 192.168.76.2
	I1129 10:25:40.441097  520783 certs.go:195] generating shared ca certs ...
	I1129 10:25:40.441116  520783 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:40.441268  520783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:25:40.441326  520783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:25:40.441339  520783 certs.go:257] generating profile certs ...
	I1129 10:25:40.441437  520783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.key
	I1129 10:25:40.441513  520783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16
	I1129 10:25:40.441559  520783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key
	I1129 10:25:40.441689  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:25:40.441733  520783 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:25:40.441748  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:25:40.441778  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:25:40.441812  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:25:40.441844  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:25:40.441894  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:25:40.442583  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:25:40.465528  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:25:40.492229  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:25:40.519917  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:25:40.542602  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:25:40.565150  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:25:40.595023  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:25:40.620722  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:25:40.642181  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:25:40.666954  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:25:40.687508  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:25:40.708381  520783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:25:40.723068  520783 ssh_runner.go:195] Run: openssl version
	I1129 10:25:40.729600  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:25:40.738061  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.741926  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.742019  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.785211  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:25:40.795090  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:25:40.804437  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.808277  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.808384  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.850407  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:25:40.858554  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:25:40.867207  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.873779  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.873855  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.916768  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
	I1129 10:25:40.925004  520783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:25:40.929215  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:25:40.970773  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:25:41.011635  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:25:41.052702  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:25:41.123188  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:25:41.179116  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
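	Each of the openssl probes above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now. The contract in one line (apiserver cert path taken from the copies above):
	minikube -p newest-cni-156330 ssh -- \
	  "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo 'valid >= 24h' || echo 'expires within 24h'"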
	I1129 10:25:41.269345  520783 kubeadm.go:401] StartCluster: {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:41.269437  520783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:25:41.269498  520783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:25:41.336436  520783 cri.go:89] found id: "3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95"
	I1129 10:25:41.336507  520783 cri.go:89] found id: "7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270"
	I1129 10:25:41.336527  520783 cri.go:89] found id: "7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61"
	I1129 10:25:41.336547  520783 cri.go:89] found id: "627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec"
	I1129 10:25:41.336599  520783 cri.go:89] found id: ""
	I1129 10:25:41.336696  520783 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:25:41.353351  520783 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:41Z" level=error msg="open /run/runc: no such file or directory"
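	The `runc list` probe fails here only because /run/runc does not exist on this node, so the unpause check is skipped with a warning; the kube-system containers are still visible at the CRI level, which is where the "found id" lines above come from. The equivalent listing by hand:
	minikube -p newest-cni-156330 ssh -- \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"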
	I1129 10:25:41.353481  520783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:25:41.368416  520783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:25:41.368496  520783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:25:41.368580  520783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:25:41.386362  520783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:25:41.387001  520783 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-156330" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:41.387351  520783 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-156330" cluster setting kubeconfig missing "newest-cni-156330" context setting]
	I1129 10:25:41.387944  520783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
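	The shared kubeconfig was missing both the cluster and the context entries for this profile, so it is repaired in place here. A quick check that the repaired file now carries them:
	kubectl config get-contexts \
	  --kubeconfig /home/jenkins/minikube-integration/22000-300311/kubeconfig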
	I1129 10:25:41.389765  520783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:25:41.414349  520783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:25:41.414428  520783 kubeadm.go:602] duration metric: took 45.91336ms to restartPrimaryControlPlane
	I1129 10:25:41.414453  520783 kubeadm.go:403] duration metric: took 145.118759ms to StartCluster
	I1129 10:25:41.414499  520783 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:41.414588  520783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:41.415560  520783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:41.415848  520783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:41.416273  520783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:25:41.416361  520783 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-156330"
	I1129 10:25:41.416376  520783 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-156330"
	W1129 10:25:41.416383  520783 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:25:41.416405  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.417100  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.417508  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:41.417596  520783 addons.go:70] Setting dashboard=true in profile "newest-cni-156330"
	I1129 10:25:41.417628  520783 addons.go:239] Setting addon dashboard=true in "newest-cni-156330"
	W1129 10:25:41.417658  520783 addons.go:248] addon dashboard should already be in state true
	I1129 10:25:41.417705  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.418282  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.425668  520783 out.go:179] * Verifying Kubernetes components...
	I1129 10:25:41.425932  520783 addons.go:70] Setting default-storageclass=true in profile "newest-cni-156330"
	I1129 10:25:41.425951  520783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-156330"
	I1129 10:25:41.426390  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.429930  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:41.475428  520783 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:25:41.475516  520783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:25:41.479430  520783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:41.479455  520783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:25:41.479524  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.484773  520783 addons.go:239] Setting addon default-storageclass=true in "newest-cni-156330"
	W1129 10:25:41.484796  520783 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:25:41.484821  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.485016  520783 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1129 10:25:38.915065  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:40.915459  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:41.485256  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.493190  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:25:41.493227  520783 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:25:41.493297  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.522275  520783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:41.522299  520783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:25:41.522372  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.547766  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:41.567503  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:41.569956  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
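	Note: the cli_runner/sshutil pairs above show how the node's SSH endpoint is resolved: a docker container inspect call with a Go template extracts the host port published for 22/tcp, and the SSH client then dials 127.0.0.1 on that port (33466 here). A small sketch of the same lookup; only the container name is an assumption, and the template string is copied verbatim from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as the "docker container inspect -f" calls above.
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	name := "newest-cni-156330" // assumption: profile/container name taken from this log
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "inspect failed:", err)
		os.Exit(1)
	}
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}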
	I1129 10:25:41.774140  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:25:41.774179  520783 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:25:41.785495  520783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:41.819728  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:41.828913  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:25:41.828941  520783 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:25:41.848560  520783 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:25:41.848646  520783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:25:41.891157  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:41.903236  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:25:41.903264  520783 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:25:41.973868  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:25:41.973893  520783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:25:42.039424  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:25:42.039452  520783 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:25:42.115754  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:25:42.115783  520783 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:25:42.162711  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:25:42.162742  520783 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:25:42.197631  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:25:42.197676  520783 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:25:42.229458  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:25:42.229489  520783 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:25:42.261569  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1129 10:25:43.414781  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:45.418677  516700 pod_ready.go:94] pod "coredns-66bc5c9577-8rvzs" is "Ready"
	I1129 10:25:45.418707  516700 pod_ready.go:86] duration metric: took 31.010809135s for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.434808  516700 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.448724  516700 pod_ready.go:94] pod "etcd-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.448749  516700 pod_ready.go:86] duration metric: took 13.913233ms for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.529798  516700 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.536477  516700 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.536510  516700 pod_ready.go:86] duration metric: took 6.688629ms for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.540499  516700 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.611836  516700 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.611861  516700 pod_ready.go:86] duration metric: took 71.338335ms for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.811252  516700 pod_ready.go:83] waiting for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.211857  516700 pod_ready.go:94] pod "kube-proxy-68szw" is "Ready"
	I1129 10:25:46.211879  516700 pod_ready.go:86] duration metric: took 400.603709ms for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.410921  516700 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.811289  516700 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:46.811313  516700 pod_ready.go:86] duration metric: took 400.370524ms for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.811327  516700 pod_ready.go:40] duration metric: took 32.418940923s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:25:46.910969  516700 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:25:46.914845  516700 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-194354" cluster and "default" namespace by default
	I1129 10:25:46.166209  520783 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.317535021s)
	I1129 10:25:46.166246  520783 api_server.go:72] duration metric: took 4.750339473s to wait for apiserver process to appear ...
	I1129 10:25:46.166252  520783 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:25:46.166275  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:46.167052  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.347291093s)
	I1129 10:25:46.208206  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:46.208251  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:46.666547  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:46.707704  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:46.707739  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.166494  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:47.199790  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:47.199836  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.667268  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:47.708336  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:47.708364  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.823521  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.932326168s)
	I1129 10:25:47.979461  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.717842144s)
	I1129 10:25:47.982802  520783 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-156330 addons enable metrics-server
	
	I1129 10:25:47.985849  520783 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:25:47.988738  520783 addons.go:530] duration metric: took 6.572462569s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:25:48.166717  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:48.177776  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:25:48.178945  520783 api_server.go:141] control plane version: v1.34.1
	I1129 10:25:48.178980  520783 api_server.go:131] duration metric: took 2.012720035s to wait for apiserver health ...
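	Note: the repeated 500s above are normal for a control-plane restart: /healthz aggregates the post-start hook checks, and it keeps failing until rbac/bootstrap-roles and the priority-class bootstrap complete, so the poll simply retries roughly every half second until 200 "ok" arrives at 10:25:48. A minimal sketch of that readiness poll; TLS verification is skipped here purely for brevity (an assumption; the real client trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Println("not ready yet, status", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	fmt.Println("gave up waiting for apiserver health")
}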
	I1129 10:25:48.178990  520783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:25:48.183513  520783 system_pods.go:59] 8 kube-system pods found
	I1129 10:25:48.183559  520783 system_pods.go:61] "coredns-66bc5c9577-qmqkb" [17fb87a0-6829-48b1-8fec-653431fdffdc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:48.183608  520783 system_pods.go:61] "etcd-newest-cni-156330" [746d2e85-25b2-4bfc-a73f-0915d8ad139f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:25:48.183622  520783 system_pods.go:61] "kindnet-pbbpw" [91c0b846-d32c-4a34-b86e-0a70463acf97] Running
	I1129 10:25:48.183630  520783 system_pods.go:61] "kube-apiserver-newest-cni-156330" [f22cc283-0f55-4963-b408-f0e6369fe13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:25:48.183640  520783 system_pods.go:61] "kube-controller-manager-newest-cni-156330" [48ed13b6-74ed-453d-b487-840731d8497f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:25:48.183646  520783 system_pods.go:61] "kube-proxy-7k5nl" [5066bedf-aec0-4cb1-b9da-7073ad77a358] Running
	I1129 10:25:48.183673  520783 system_pods.go:61] "kube-scheduler-newest-cni-156330" [0fece855-29cc-4724-a24a-eba2d26500e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:25:48.183679  520783 system_pods.go:61] "storage-provisioner" [5a6c22d4-57aa-45cd-9972-b81a1c2998a4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:48.183699  520783 system_pods.go:74] duration metric: took 4.701988ms to wait for pod list to return data ...
	I1129 10:25:48.183714  520783 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:25:48.190356  520783 default_sa.go:45] found service account: "default"
	I1129 10:25:48.190384  520783 default_sa.go:55] duration metric: took 6.663119ms for default service account to be created ...
	I1129 10:25:48.190398  520783 kubeadm.go:587] duration metric: took 6.774490446s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 10:25:48.190443  520783 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:25:48.193234  520783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:25:48.193271  520783 node_conditions.go:123] node cpu capacity is 2
	I1129 10:25:48.193285  520783 node_conditions.go:105] duration metric: took 2.835562ms to run NodePressure ...
	I1129 10:25:48.193334  520783 start.go:242] waiting for startup goroutines ...
	I1129 10:25:48.193345  520783 start.go:247] waiting for cluster config update ...
	I1129 10:25:48.193362  520783 start.go:256] writing updated cluster config ...
	I1129 10:25:48.193709  520783 ssh_runner.go:195] Run: rm -f paused
	I1129 10:25:48.298631  520783 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:25:48.302596  520783 out.go:179] * Done! kubectl is now configured to use "newest-cni-156330" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.89096266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.895607409Z" level=info msg="Running pod sandbox: kube-system/kindnet-pbbpw/POD" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.895668817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.011192541Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.014510398Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b4f86084-b8df-4f32-a307-76f06a34a031 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.029511307Z" level=info msg="Ran pod sandbox 582de72a4cb1d042e758bbf331179b4eba6c12a4d3ca3bc92d47fdff0d1d1553 with infra container: kube-system/kindnet-pbbpw/POD" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.051495175Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=82ecf48a-8ba8-4db4-aa65-815b5b170ea7 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.057716066Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=db43833e-2808-47ec-9eb5-e1ac70d6afea name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.061671116Z" level=info msg="Creating container: kube-system/kindnet-pbbpw/kindnet-cni" id=91fbb772-9e74-4667-8b01-c60ac0a27830 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.061861206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.091587854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.093668945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.107381774Z" level=info msg="Ran pod sandbox f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c with infra container: kube-system/kube-proxy-7k5nl/POD" id=b4f86084-b8df-4f32-a307-76f06a34a031 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.119453934Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3a5a2981-dd4f-4897-9299-0dafb44fe666 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.121250804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7b5292b5-e675-40e5-9cfb-54654e2bbadc name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.131171563Z" level=info msg="Creating container: kube-system/kube-proxy-7k5nl/kube-proxy" id=dc927a32-8d9a-4c87-9ecc-849574547b0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.131282703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.164464054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.164986965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.256107296Z" level=info msg="Created container 327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e: kube-system/kindnet-pbbpw/kindnet-cni" id=91fbb772-9e74-4667-8b01-c60ac0a27830 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.258774528Z" level=info msg="Starting container: 327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e" id=efb9daf7-69f1-4942-8704-3df1969f9ce1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.267343295Z" level=info msg="Started container" PID=1069 containerID=327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e description=kube-system/kindnet-pbbpw/kindnet-cni id=efb9daf7-69f1-4942-8704-3df1969f9ce1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=582de72a4cb1d042e758bbf331179b4eba6c12a4d3ca3bc92d47fdff0d1d1553
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.295808758Z" level=info msg="Created container 20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199: kube-system/kube-proxy-7k5nl/kube-proxy" id=dc927a32-8d9a-4c87-9ecc-849574547b0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.300587572Z" level=info msg="Starting container: 20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199" id=5eb589f4-2b9a-4a60-a826-2cb7e072eaa4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.307397065Z" level=info msg="Started container" PID=1071 containerID=20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199 description=kube-system/kube-proxy-7k5nl/kube-proxy id=5eb589f4-2b9a-4a60-a826-2cb7e072eaa4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20aed5aaece95       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   f332b681f3cbb       kube-proxy-7k5nl                            kube-system
	327163c45989a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   582de72a4cb1d       kindnet-pbbpw                               kube-system
	3069c57ee217c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   10d59ebe39c6c       kube-controller-manager-newest-cni-156330   kube-system
	7181c5b13ec16       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   930a6a6aa34ee       etcd-newest-cni-156330                      kube-system
	7ca537fbb625a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   0e82fce72d204       kube-scheduler-newest-cni-156330            kube-system
	627ffe7d66e14       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   72db8450609de       kube-apiserver-newest-cni-156330            kube-system
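	Note: the container status table (most likely collected with crictl on the node) confirms the restart path: every control-plane container is on ATTEMPT 1, and kube-proxy plus kindnet-cni were recreated only seconds before the log was captured. A trivial sketch that reproduces the same view, assuming crictl is installed and pointed at CRI-O's runtime socket.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// "crictl ps -a" lists all CRI containers, running or exited, in a table
	// of the same shape as the one above.
	cmd := exec.Command("sudo", "crictl", "ps", "-a")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
}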
	
	
	==> describe nodes <==
	Name:               newest-cni-156330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-156330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-156330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_25_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:25:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-156330
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:25:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-156330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                18f776bf-837e-4512-96d1-eca8626890e6
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-156330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-pbbpw                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-156330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-156330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-7k5nl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-156330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-156330 event: Registered Node newest-cni-156330 in Controller
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-156330 event: Registered Node newest-cni-156330 in Controller
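	Note: the describe-nodes output explains the Pending pods in the earlier system_pods list: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint and its Ready condition is False because no CNI configuration has been written to /etc/cni/net.d yet (kindnet had started only seconds earlier). A short client-go sketch for checking exactly those two fields; the kubeconfig location and node name are assumptions taken from this log.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-156330", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s: %s\n", c.Status, c.Message)
		}
	}
	for _, t := range node.Spec.Taints {
		fmt.Println("taint:", t.Key) // node.kubernetes.io/not-ready until the CNI comes up
	}
}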
	
	
	==> dmesg <==
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	[Nov29 10:25] overlayfs: idmapped layers are currently not supported
	[  +6.600462] overlayfs: idmapped layers are currently not supported
	[ +33.077974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270] <==
	{"level":"warn","ts":"2025-11-29T10:25:43.922441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.946427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.960210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.998565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.012915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.026859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.045955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.064294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.083957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.108394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.122123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.167290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.183599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.186314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.208942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.231586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.254376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.277065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.301983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.314646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.344609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.368593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.405542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.414776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.548909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58504","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:25:52 up  3:08,  0 user,  load average: 6.46, 4.55, 3.19
	Linux newest-cni-156330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e] <==
	I1129 10:25:47.419207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:25:47.419448       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:25:47.419547       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:25:47.419558       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:25:47.419571       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:25:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:25:47.617260       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:25:47.617354       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:25:47.622269       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:25:47.623170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec] <==
	I1129 10:25:45.967489       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 10:25:45.967525       1 policy_source.go:240] refreshing policies
	I1129 10:25:45.978490       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:25:45.978515       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:25:45.988422       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:25:45.989142       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:25:46.019331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:46.026181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:25:46.039523       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 10:25:46.039595       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:25:46.049750       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:25:46.049873       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:25:46.049907       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1129 10:25:46.065572       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:25:46.708630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:25:46.790694       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:25:47.272862       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:25:47.558654       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:25:47.688781       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:25:47.747077       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:25:47.906018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.209.215"}
	I1129 10:25:47.965443       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.108.151"}
	I1129 10:25:49.868467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:25:49.918867       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:25:49.993176       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95] <==
	I1129 10:25:49.492495       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:25:49.504621       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:49.504712       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:25:49.504743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:25:49.513945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:49.514065       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:25:49.514701       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:25:49.514804       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:25:49.515904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:25:49.516759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:25:49.516963       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:25:49.516975       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 10:25:49.517006       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:25:49.524273       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:25:49.526293       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 10:25:49.526360       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:25:49.530628       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:25:49.534210       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:25:49.540320       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:25:49.548178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:25:49.560230       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 10:25:49.560296       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:25:49.560319       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:25:49.560324       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:25:49.560330       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-proxy [20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199] <==
	I1129 10:25:48.034448       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:25:48.339815       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:25:48.458364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:25:48.458431       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:25:48.458502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:25:48.564804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:25:48.564935       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:25:48.573370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:25:48.573900       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:25:48.574204       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:48.575608       1 config.go:200] "Starting service config controller"
	I1129 10:25:48.575680       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:25:48.575724       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:25:48.575750       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:25:48.575785       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:25:48.575811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:25:48.581264       1 config.go:309] "Starting node config controller"
	I1129 10:25:48.582318       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:25:48.582389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:25:48.682156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:25:48.691290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:25:48.698194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61] <==
	I1129 10:25:48.619162       1 serving.go:386] Generated self-signed cert in-memory
	I1129 10:25:49.221527       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:25:49.221564       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:49.235541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:25:49.235767       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 10:25:49.235814       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 10:25:49.235891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:25:49.236695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:49.236761       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:49.237001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.237045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.336093       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 10:25:49.341480       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.341677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:25:45 newest-cni-156330 kubelet[737]: E1129 10:25:45.646608     737 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-156330\" not found" node="newest-cni-156330"
	Nov 29 10:25:45 newest-cni-156330 kubelet[737]: I1129 10:25:45.785100     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.079888     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-156330\" already exists" pod="kube-system/kube-controller-manager-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.079926     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099278     737 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099394     737 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099426     737 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.104999     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-156330\" already exists" pod="kube-system/kube-scheduler-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.105049     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.105481     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.126389     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-156330\" already exists" pod="kube-system/etcd-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.126424     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.152562     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-156330\" already exists" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.580192     737 apiserver.go:52] "Watching apiserver"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.675514     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-cni-cfg\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767284     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-xtables-lock\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767305     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-xtables-lock\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767333     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-lib-modules\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767351     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-lib-modules\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.827598     737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:25:47 newest-cni-156330 kubelet[737]: W1129 10:25:47.094638     737 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/crio-f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c WatchSource:0}: Error finding container f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c: Status 404 returned error can't find the container with id f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-156330 -n newest-cni-156330: exit status 2 (353.231938ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-156330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv: exit status 1 (82.480803ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qmqkb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5ngmz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-74sjv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-156330
helpers_test.go:243: (dbg) docker inspect newest-cni-156330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	        "Created": "2025-11-29T10:24:48.208994014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 520910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:25:33.760702241Z",
	            "FinishedAt": "2025-11-29T10:25:32.942412191Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hostname",
	        "HostsPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/hosts",
	        "LogPath": "/var/lib/docker/containers/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275-json.log",
	        "Name": "/newest-cni-156330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-156330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-156330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275",
	                "LowerDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d6f4a1d428b6c7e4031caa3fd58ae982382761b6507b9a134cc2df97ae6444d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-156330",
	                "Source": "/var/lib/docker/volumes/newest-cni-156330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-156330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-156330",
	                "name.minikube.sigs.k8s.io": "newest-cni-156330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d01ae1b086574f4cc9925758ff574b20eac96488bbe18dd3b136a457fc1a2cc6",
	            "SandboxKey": "/var/run/docker/netns/d01ae1b08657",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-156330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:51:64:3c:9a:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "296cf76a04b7032c7fa82b79716bf37121a065fecc07315bcd2905590381d495",
	                    "EndpointID": "d50a0e6da3cf3e7af88a434e1798181f8b3905fc9ebcf4cbd16678ab5b650c0b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-156330",
	                        "3766eb449434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330: exit status 2 (374.803682ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-156330 logs -n 25: (1.0872845s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:21 UTC │ 29 Nov 25 10:22 UTC │
	│ image   │ embed-certs-708011 image list --format=json                                                                                                                                                                                                   │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-708011 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │                     │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:22 UTC │ 29 Nov 25 10:23 UTC │
	│ delete  │ -p embed-certs-708011                                                                                                                                                                                                                         │ embed-certs-708011           │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ stop    │ -p default-k8s-diff-port-194354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ stop    │ -p newest-cni-156330 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-156330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ image   │ newest-cni-156330 image list --format=json                                                                                                                                                                                                    │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p newest-cni-156330 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:25:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:25:33.483205  520783 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:33.483583  520783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:33.483631  520783 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:33.483653  520783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:33.484035  520783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:33.484531  520783 out.go:368] Setting JSON to false
	I1129 10:25:33.485554  520783 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11283,"bootTime":1764400651,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:25:33.485686  520783 start.go:143] virtualization:  
	I1129 10:25:33.488918  520783 out.go:179] * [newest-cni-156330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:25:33.492830  520783 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:25:33.492941  520783 notify.go:221] Checking for updates...
	I1129 10:25:33.500888  520783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:25:33.504957  520783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:33.507822  520783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:25:33.510751  520783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:25:33.513658  520783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:25:33.516942  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:33.517574  520783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:25:33.551932  520783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:25:33.552052  520783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:33.616915  520783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:33.606626781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:33.617021  520783 docker.go:319] overlay module found
	I1129 10:25:33.620239  520783 out.go:179] * Using the docker driver based on existing profile
	I1129 10:25:33.623172  520783 start.go:309] selected driver: docker
	I1129 10:25:33.623195  520783 start.go:927] validating driver "docker" against &{Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:33.623311  520783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:25:33.624057  520783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:33.678713  520783 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:33.669822783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:33.679079  520783 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 10:25:33.679114  520783 cni.go:84] Creating CNI manager for ""
	I1129 10:25:33.679178  520783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:33.679218  520783 start.go:353] cluster config:
	{Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:33.682378  520783 out.go:179] * Starting "newest-cni-156330" primary control-plane node in "newest-cni-156330" cluster
	I1129 10:25:33.685163  520783 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:25:33.688064  520783 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:25:33.690959  520783 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:33.691038  520783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:25:33.691050  520783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:25:33.691069  520783 cache.go:65] Caching tarball of preloaded images
	I1129 10:25:33.691186  520783 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:25:33.691196  520783 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:25:33.691349  520783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/config.json ...
	I1129 10:25:33.710490  520783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:25:33.710514  520783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:25:33.710528  520783 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:25:33.710559  520783 start.go:360] acquireMachinesLock for newest-cni-156330: {Name:mk0b8f68121a1d050dcf1381cb60e275f8c46ccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:25:33.710618  520783 start.go:364] duration metric: took 39.139µs to acquireMachinesLock for "newest-cni-156330"
	I1129 10:25:33.710643  520783 start.go:96] Skipping create...Using existing machine configuration
	I1129 10:25:33.710649  520783 fix.go:54] fixHost starting: 
	I1129 10:25:33.710925  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:33.727465  520783 fix.go:112] recreateIfNeeded on newest-cni-156330: state=Stopped err=<nil>
	W1129 10:25:33.727510  520783 fix.go:138] unexpected machine state, will restart: <nil>
	W1129 10:25:34.414381  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:36.914501  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:33.730676  520783 out.go:252] * Restarting existing docker container for "newest-cni-156330" ...
	I1129 10:25:33.730758  520783 cli_runner.go:164] Run: docker start newest-cni-156330
	I1129 10:25:33.982936  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:34.010005  520783 kic.go:430] container "newest-cni-156330" state is running.
	I1129 10:25:34.010506  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:34.029606  520783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/config.json ...
	I1129 10:25:34.029840  520783 machine.go:94] provisionDockerMachine start ...
	I1129 10:25:34.029904  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:34.049369  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:34.049854  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:34.049909  520783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 10:25:34.050741  520783 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47338->127.0.0.1:33466: read: connection reset by peer
	I1129 10:25:37.202620  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:25:37.202646  520783 ubuntu.go:182] provisioning hostname "newest-cni-156330"
	I1129 10:25:37.202711  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.220848  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.221166  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.221182  520783 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-156330 && echo "newest-cni-156330" | sudo tee /etc/hostname
	I1129 10:25:37.383805  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-156330
	
	I1129 10:25:37.383884  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.402428  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.402750  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.402771  520783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-156330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-156330/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-156330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 10:25:37.558676  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 10:25:37.558704  520783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-300311/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-300311/.minikube}
	I1129 10:25:37.558735  520783 ubuntu.go:190] setting up certificates
	I1129 10:25:37.558745  520783 provision.go:84] configureAuth start
	I1129 10:25:37.558839  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:37.580706  520783 provision.go:143] copyHostCerts
	I1129 10:25:37.580784  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem, removing ...
	I1129 10:25:37.580802  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem
	I1129 10:25:37.580883  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/ca.pem (1082 bytes)
	I1129 10:25:37.580993  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem, removing ...
	I1129 10:25:37.581005  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem
	I1129 10:25:37.581035  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/cert.pem (1123 bytes)
	I1129 10:25:37.581111  520783 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem, removing ...
	I1129 10:25:37.581120  520783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem
	I1129 10:25:37.581147  520783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-300311/.minikube/key.pem (1679 bytes)
	I1129 10:25:37.581207  520783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem org=jenkins.newest-cni-156330 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-156330]
	I1129 10:25:37.790579  520783 provision.go:177] copyRemoteCerts
	I1129 10:25:37.790654  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 10:25:37.790695  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.811513  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:37.919783  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 10:25:37.938941  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 10:25:37.958315  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 10:25:37.977177  520783 provision.go:87] duration metric: took 418.401144ms to configureAuth
	I1129 10:25:37.977211  520783 ubuntu.go:206] setting minikube options for container-runtime
	I1129 10:25:37.977417  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:37.977529  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:37.996004  520783 main.go:143] libmachine: Using SSH client type: native
	I1129 10:25:37.996325  520783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1129 10:25:37.996339  520783 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 10:25:38.347426  520783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 10:25:38.347454  520783 machine.go:97] duration metric: took 4.31760058s to provisionDockerMachine
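The SSH step above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR is treated as an insecure registry range. A small sanity check on the node, assuming the kicbase crio.service actually sources that sysconfig file (the unit itself is not shown in this log):

  cat /etc/sysconfig/crio.minikube   # should contain the --insecure-registry flag written above
  systemctl is-active crio           # CRI-O came back after the restart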
	I1129 10:25:38.347467  520783 start.go:293] postStartSetup for "newest-cni-156330" (driver="docker")
	I1129 10:25:38.347478  520783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 10:25:38.347548  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 10:25:38.347594  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.368597  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.473826  520783 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 10:25:38.477470  520783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 10:25:38.477504  520783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 10:25:38.477517  520783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/addons for local assets ...
	I1129 10:25:38.477574  520783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-300311/.minikube/files for local assets ...
	I1129 10:25:38.477668  520783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem -> 3021822.pem in /etc/ssl/certs
	I1129 10:25:38.477785  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 10:25:38.486187  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:25:38.505500  520783 start.go:296] duration metric: took 158.018326ms for postStartSetup
	I1129 10:25:38.505584  520783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 10:25:38.505630  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.522522  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.623173  520783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 10:25:38.627993  520783 fix.go:56] duration metric: took 4.91733716s for fixHost
	I1129 10:25:38.628021  520783 start.go:83] releasing machines lock for "newest-cni-156330", held for 4.917387459s
	I1129 10:25:38.628093  520783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-156330
	I1129 10:25:38.645921  520783 ssh_runner.go:195] Run: cat /version.json
	I1129 10:25:38.645972  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.646395  520783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 10:25:38.646465  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:38.667514  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.675692  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:38.769811  520783 ssh_runner.go:195] Run: systemctl --version
	I1129 10:25:38.858821  520783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 10:25:38.900675  520783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 10:25:38.905662  520783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 10:25:38.905766  520783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 10:25:38.915722  520783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 10:25:38.915775  520783 start.go:496] detecting cgroup driver to use...
	I1129 10:25:38.915830  520783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1129 10:25:38.915915  520783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 10:25:38.931285  520783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 10:25:38.944454  520783 docker.go:218] disabling cri-docker service (if available) ...
	I1129 10:25:38.944525  520783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 10:25:38.959999  520783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 10:25:38.973085  520783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 10:25:39.117045  520783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 10:25:39.235180  520783 docker.go:234] disabling docker service ...
	I1129 10:25:39.235251  520783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 10:25:39.250302  520783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 10:25:39.263825  520783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 10:25:39.378495  520783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 10:25:39.513169  520783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 10:25:39.527265  520783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 10:25:39.544003  520783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 10:25:39.544116  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.553026  520783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 10:25:39.553116  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.562470  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.571696  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.581395  520783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 10:25:39.590422  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.599869  520783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.608384  520783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 10:25:39.616823  520783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 10:25:39.624346  520783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 10:25:39.631767  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:39.740182  520783 ssh_runner.go:195] Run: sudo systemctl restart crio
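The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the daemon-reload and CRI-O restart. A sketch for spot-checking the resulting drop-in (the grep patterns are illustrative, not part of minikube):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
  sudo grep -A2 default_sysctls /etc/crio/crio.conf.d/02-crio.conf
  systemctl is-active crio && test -S /var/run/crio/crio.sock && echo 'crio restarted, socket present'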
	I1129 10:25:39.906224  520783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 10:25:39.906306  520783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 10:25:39.914550  520783 start.go:564] Will wait 60s for crictl version
	I1129 10:25:39.914800  520783 ssh_runner.go:195] Run: which crictl
	I1129 10:25:39.918896  520783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 10:25:39.944208  520783 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 10:25:39.944295  520783 ssh_runner.go:195] Run: crio --version
	I1129 10:25:39.973633  520783 ssh_runner.go:195] Run: crio --version
	I1129 10:25:40.018265  520783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 10:25:40.021305  520783 cli_runner.go:164] Run: docker network inspect newest-cni-156330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:25:40.044128  520783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 10:25:40.048584  520783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:25:40.062906  520783 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 10:25:40.065847  520783 kubeadm.go:884] updating cluster {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 10:25:40.066003  520783 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:40.066091  520783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:40.107358  520783 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:40.107388  520783 crio.go:433] Images already preloaded, skipping extraction
	I1129 10:25:40.107454  520783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 10:25:40.134833  520783 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 10:25:40.134859  520783 cache_images.go:86] Images are preloaded, skipping loading
	I1129 10:25:40.134869  520783 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 10:25:40.134974  520783 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-156330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
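The drop-in above blanks the packaged ExecStart and relaunches kubelet with minikube's flags; --cgroups-per-qos=false and --enforce-node-allocatable= turn off QoS cgroup enforcement for this containerized node. Two generic ways to see what actually landed once the unit files are copied over (see the scp lines further down); nothing here is minikube-specific:

  sudo systemctl cat kubelet             # kubelet.service plus the 10-kubeadm.conf drop-in
  ps -o args= -C kubelet | tr ' ' '\n'   # flags of the running kubelet, one per line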
	I1129 10:25:40.135059  520783 ssh_runner.go:195] Run: crio config
	I1129 10:25:40.214880  520783 cni.go:84] Creating CNI manager for ""
	I1129 10:25:40.214950  520783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:40.214974  520783 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 10:25:40.215000  520783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-156330 NodeName:newest-cni-156330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 10:25:40.215131  520783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-156330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 10:25:40.215209  520783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 10:25:40.224557  520783 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 10:25:40.224711  520783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 10:25:40.233467  520783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 10:25:40.247553  520783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 10:25:40.261447  520783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
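The kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document stream) is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. To re-check it by hand, recent kubeadm releases ship a structural validator; a hedged sketch, assuming the bundled v1.34.1 binary supports the subcommand:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new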
	I1129 10:25:40.273894  520783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 10:25:40.277743  520783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 10:25:40.287578  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:40.418326  520783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:40.441072  520783 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330 for IP: 192.168.76.2
	I1129 10:25:40.441097  520783 certs.go:195] generating shared ca certs ...
	I1129 10:25:40.441116  520783 certs.go:227] acquiring lock for ca certs: {Name:mk599e623be34102f683a76f5fc4ed08054e9df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:40.441268  520783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key
	I1129 10:25:40.441326  520783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key
	I1129 10:25:40.441339  520783 certs.go:257] generating profile certs ...
	I1129 10:25:40.441437  520783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/client.key
	I1129 10:25:40.441513  520783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key.fb07df16
	I1129 10:25:40.441559  520783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key
	I1129 10:25:40.441689  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem (1338 bytes)
	W1129 10:25:40.441733  520783 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182_empty.pem, impossibly tiny 0 bytes
	I1129 10:25:40.441748  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 10:25:40.441778  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem (1082 bytes)
	I1129 10:25:40.441812  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem (1123 bytes)
	I1129 10:25:40.441844  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/certs/key.pem (1679 bytes)
	I1129 10:25:40.441894  520783 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem (1708 bytes)
	I1129 10:25:40.442583  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 10:25:40.465528  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 10:25:40.492229  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 10:25:40.519917  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 10:25:40.542602  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 10:25:40.565150  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 10:25:40.595023  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 10:25:40.620722  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/newest-cni-156330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 10:25:40.642181  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 10:25:40.666954  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/certs/302182.pem --> /usr/share/ca-certificates/302182.pem (1338 bytes)
	I1129 10:25:40.687508  520783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/ssl/certs/3021822.pem --> /usr/share/ca-certificates/3021822.pem (1708 bytes)
	I1129 10:25:40.708381  520783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 10:25:40.723068  520783 ssh_runner.go:195] Run: openssl version
	I1129 10:25:40.729600  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3021822.pem && ln -fs /usr/share/ca-certificates/3021822.pem /etc/ssl/certs/3021822.pem"
	I1129 10:25:40.738061  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.741926  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 09:22 /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.742019  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3021822.pem
	I1129 10:25:40.785211  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3021822.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 10:25:40.795090  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 10:25:40.804437  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.808277  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.808384  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 10:25:40.850407  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 10:25:40.858554  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302182.pem && ln -fs /usr/share/ca-certificates/302182.pem /etc/ssl/certs/302182.pem"
	I1129 10:25:40.867207  520783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.873779  520783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 09:22 /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.873855  520783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302182.pem
	I1129 10:25:40.916768  520783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/302182.pem /etc/ssl/certs/51391683.0"
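Each certificate copied into /usr/share/ca-certificates is linked into /etc/ssl/certs twice: once by name and once under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above), the standard c_rehash layout OpenSSL uses for CA lookup. The same convention by hand, for the minikubeCA file shown in the log:

  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should print: OK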
	I1129 10:25:40.925004  520783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 10:25:40.929215  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 10:25:40.970773  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 10:25:41.011635  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 10:25:41.052702  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 10:25:41.123188  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 10:25:41.179116  520783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
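Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now; a non-zero status is what would make minikube treat the cert as stale instead of reusing it. The check in isolation:

  if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo 'valid for at least another 24h'
  else
    echo 'expires within 24h (would be regenerated)'
  fi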
	I1129 10:25:41.269345  520783 kubeadm.go:401] StartCluster: {Name:newest-cni-156330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-156330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:41.269437  520783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 10:25:41.269498  520783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 10:25:41.336436  520783 cri.go:89] found id: "3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95"
	I1129 10:25:41.336507  520783 cri.go:89] found id: "7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270"
	I1129 10:25:41.336527  520783 cri.go:89] found id: "7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61"
	I1129 10:25:41.336547  520783 cri.go:89] found id: "627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec"
	I1129 10:25:41.336599  520783 cri.go:89] found id: ""
	I1129 10:25:41.336696  520783 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 10:25:41.353351  520783 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:25:41Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:25:41.353481  520783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 10:25:41.368416  520783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 10:25:41.368496  520783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 10:25:41.368580  520783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 10:25:41.386362  520783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 10:25:41.387001  520783 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-156330" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:41.387351  520783 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-300311/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-156330" cluster setting kubeconfig missing "newest-cni-156330" context setting]
	I1129 10:25:41.387944  520783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:41.389765  520783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 10:25:41.414349  520783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 10:25:41.414428  520783 kubeadm.go:602] duration metric: took 45.91336ms to restartPrimaryControlPlane
	I1129 10:25:41.414453  520783 kubeadm.go:403] duration metric: took 145.118759ms to StartCluster
	I1129 10:25:41.414499  520783 settings.go:142] acquiring lock: {Name:mk6a3cabaee3e94f4bea8489c7c48d020021f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:41.414588  520783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:41.415560  520783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/kubeconfig: {Name:mk6dd2421d886200989314e757f60300d41edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
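The kubeconfig repair above adds the missing "newest-cni-156330" cluster and context entries to the shared kubeconfig file. Confirming the result from the host is plain kubectl, using the path from the log:

  export KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
  kubectl config get-contexts
  kubectl config use-context newest-cni-156330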
	I1129 10:25:41.415848  520783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:41.416273  520783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 10:25:41.416361  520783 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-156330"
	I1129 10:25:41.416376  520783 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-156330"
	W1129 10:25:41.416383  520783 addons.go:248] addon storage-provisioner should already be in state true
	I1129 10:25:41.416405  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.417100  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.417508  520783 config.go:182] Loaded profile config "newest-cni-156330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:41.417596  520783 addons.go:70] Setting dashboard=true in profile "newest-cni-156330"
	I1129 10:25:41.417628  520783 addons.go:239] Setting addon dashboard=true in "newest-cni-156330"
	W1129 10:25:41.417658  520783 addons.go:248] addon dashboard should already be in state true
	I1129 10:25:41.417705  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.418282  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.425668  520783 out.go:179] * Verifying Kubernetes components...
	I1129 10:25:41.425932  520783 addons.go:70] Setting default-storageclass=true in profile "newest-cni-156330"
	I1129 10:25:41.425951  520783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-156330"
	I1129 10:25:41.426390  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.429930  520783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 10:25:41.475428  520783 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 10:25:41.475516  520783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 10:25:41.479430  520783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:41.479455  520783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 10:25:41.479524  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.484773  520783 addons.go:239] Setting addon default-storageclass=true in "newest-cni-156330"
	W1129 10:25:41.484796  520783 addons.go:248] addon default-storageclass should already be in state true
	I1129 10:25:41.484821  520783 host.go:66] Checking if "newest-cni-156330" exists ...
	I1129 10:25:41.485016  520783 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1129 10:25:38.915065  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	W1129 10:25:40.915459  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:41.485256  520783 cli_runner.go:164] Run: docker container inspect newest-cni-156330 --format={{.State.Status}}
	I1129 10:25:41.493190  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 10:25:41.493227  520783 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 10:25:41.493297  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.522275  520783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:41.522299  520783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 10:25:41.522372  520783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-156330
	I1129 10:25:41.547766  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:41.567503  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:41.569956  520783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/newest-cni-156330/id_rsa Username:docker}
	I1129 10:25:41.774140  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 10:25:41.774179  520783 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 10:25:41.785495  520783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 10:25:41.819728  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 10:25:41.828913  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 10:25:41.828941  520783 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 10:25:41.848560  520783 api_server.go:52] waiting for apiserver process to appear ...
	I1129 10:25:41.848646  520783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 10:25:41.891157  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 10:25:41.903236  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 10:25:41.903264  520783 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 10:25:41.973868  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 10:25:41.973893  520783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 10:25:42.039424  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 10:25:42.039452  520783 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 10:25:42.115754  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 10:25:42.115783  520783 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 10:25:42.162711  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 10:25:42.162742  520783 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 10:25:42.197631  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 10:25:42.197676  520783 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 10:25:42.229458  520783 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 10:25:42.229489  520783 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 10:25:42.261569  520783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
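The dashboard addon is applied as ten small manifests in a single kubectl apply against the node-local kubeconfig. A hedged follow-up check once the apply returns; the kubernetes-dashboard namespace and deployment name here are the usual minikube dashboard defaults, not something this log confirms:

  kubectl --context newest-cni-156330 -n kubernetes-dashboard get deploy,po
  kubectl --context newest-cni-156330 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=120s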
	W1129 10:25:43.414781  516700 pod_ready.go:104] pod "coredns-66bc5c9577-8rvzs" is not "Ready", error: <nil>
	I1129 10:25:45.418677  516700 pod_ready.go:94] pod "coredns-66bc5c9577-8rvzs" is "Ready"
	I1129 10:25:45.418707  516700 pod_ready.go:86] duration metric: took 31.010809135s for pod "coredns-66bc5c9577-8rvzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.434808  516700 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.448724  516700 pod_ready.go:94] pod "etcd-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.448749  516700 pod_ready.go:86] duration metric: took 13.913233ms for pod "etcd-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.529798  516700 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.536477  516700 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.536510  516700 pod_ready.go:86] duration metric: took 6.688629ms for pod "kube-apiserver-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.540499  516700 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.611836  516700 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:45.611861  516700 pod_ready.go:86] duration metric: took 71.338335ms for pod "kube-controller-manager-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:45.811252  516700 pod_ready.go:83] waiting for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.211857  516700 pod_ready.go:94] pod "kube-proxy-68szw" is "Ready"
	I1129 10:25:46.211879  516700 pod_ready.go:86] duration metric: took 400.603709ms for pod "kube-proxy-68szw" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.410921  516700 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.811289  516700 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-194354" is "Ready"
	I1129 10:25:46.811313  516700 pod_ready.go:86] duration metric: took 400.370524ms for pod "kube-scheduler-default-k8s-diff-port-194354" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 10:25:46.811327  516700 pod_ready.go:40] duration metric: took 32.418940923s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 10:25:46.910969  516700 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:25:46.914845  516700 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-194354" cluster and "default" namespace by default
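The pod_ready loop from process 516700 (interleaved above) polls the kube-system pods by label until each reports Ready, taking about 32s in total for default-k8s-diff-port-194354. Roughly the same wait expressed with kubectl, using labels listed in the log; the timeout value is arbitrary:

  kubectl --context default-k8s-diff-port-194354 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=180s
  kubectl --context default-k8s-diff-port-194354 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=180s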
	I1129 10:25:46.166209  520783 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.317535021s)
	I1129 10:25:46.166246  520783 api_server.go:72] duration metric: took 4.750339473s to wait for apiserver process to appear ...
	I1129 10:25:46.166252  520783 api_server.go:88] waiting for apiserver healthz status ...
	I1129 10:25:46.166275  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:46.167052  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.347291093s)
	I1129 10:25:46.208206  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:46.208251  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
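The 500 here is expected this early: only two post-start hooks are still pending (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), and minikube simply re-polls /healthz until they report ok. The same verbose breakdown can be fetched by hand; /healthz is normally readable anonymously via the default system:public-info-viewer binding, and -k skips TLS verification against the minikube CA:

  curl -ks 'https://192.168.76.2:8443/healthz?verbose'   # per-check [+]/[-] listing, as above
  curl -ks 'https://192.168.76.2:8443/readyz?verbose'    # same style of output once the startup hooks finish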
	I1129 10:25:46.666547  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:46.707704  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:46.707739  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.166494  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:47.199790  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:47.199836  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.667268  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:47.708336  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 10:25:47.708364  520783 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 10:25:47.823521  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.932326168s)
	I1129 10:25:47.979461  520783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.717842144s)
	I1129 10:25:47.982802  520783 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-156330 addons enable metrics-server
	
	I1129 10:25:47.985849  520783 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1129 10:25:47.988738  520783 addons.go:530] duration metric: took 6.572462569s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1129 10:25:48.166717  520783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 10:25:48.177776  520783 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 10:25:48.178945  520783 api_server.go:141] control plane version: v1.34.1
	I1129 10:25:48.178980  520783 api_server.go:131] duration metric: took 2.012720035s to wait for apiserver health ...
	I1129 10:25:48.178990  520783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 10:25:48.183513  520783 system_pods.go:59] 8 kube-system pods found
	I1129 10:25:48.183559  520783 system_pods.go:61] "coredns-66bc5c9577-qmqkb" [17fb87a0-6829-48b1-8fec-653431fdffdc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:48.183608  520783 system_pods.go:61] "etcd-newest-cni-156330" [746d2e85-25b2-4bfc-a73f-0915d8ad139f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 10:25:48.183622  520783 system_pods.go:61] "kindnet-pbbpw" [91c0b846-d32c-4a34-b86e-0a70463acf97] Running
	I1129 10:25:48.183630  520783 system_pods.go:61] "kube-apiserver-newest-cni-156330" [f22cc283-0f55-4963-b408-f0e6369fe13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 10:25:48.183640  520783 system_pods.go:61] "kube-controller-manager-newest-cni-156330" [48ed13b6-74ed-453d-b487-840731d8497f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 10:25:48.183646  520783 system_pods.go:61] "kube-proxy-7k5nl" [5066bedf-aec0-4cb1-b9da-7073ad77a358] Running
	I1129 10:25:48.183673  520783 system_pods.go:61] "kube-scheduler-newest-cni-156330" [0fece855-29cc-4724-a24a-eba2d26500e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 10:25:48.183679  520783 system_pods.go:61] "storage-provisioner" [5a6c22d4-57aa-45cd-9972-b81a1c2998a4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 10:25:48.183699  520783 system_pods.go:74] duration metric: took 4.701988ms to wait for pod list to return data ...
	I1129 10:25:48.183714  520783 default_sa.go:34] waiting for default service account to be created ...
	I1129 10:25:48.190356  520783 default_sa.go:45] found service account: "default"
	I1129 10:25:48.190384  520783 default_sa.go:55] duration metric: took 6.663119ms for default service account to be created ...
	I1129 10:25:48.190398  520783 kubeadm.go:587] duration metric: took 6.774490446s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 10:25:48.190443  520783 node_conditions.go:102] verifying NodePressure condition ...
	I1129 10:25:48.193234  520783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1129 10:25:48.193271  520783 node_conditions.go:123] node cpu capacity is 2
	I1129 10:25:48.193285  520783 node_conditions.go:105] duration metric: took 2.835562ms to run NodePressure ...
	I1129 10:25:48.193334  520783 start.go:242] waiting for startup goroutines ...
	I1129 10:25:48.193345  520783 start.go:247] waiting for cluster config update ...
	I1129 10:25:48.193362  520783 start.go:256] writing updated cluster config ...
	I1129 10:25:48.193709  520783 ssh_runner.go:195] Run: rm -f paused
	I1129 10:25:48.298631  520783 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1129 10:25:48.302596  520783 out.go:179] * Done! kubectl is now configured to use "newest-cni-156330" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.89096266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.895607409Z" level=info msg="Running pod sandbox: kube-system/kindnet-pbbpw/POD" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:46 newest-cni-156330 crio[617]: time="2025-11-29T10:25:46.895668817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.011192541Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.014510398Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b4f86084-b8df-4f32-a307-76f06a34a031 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.029511307Z" level=info msg="Ran pod sandbox 582de72a4cb1d042e758bbf331179b4eba6c12a4d3ca3bc92d47fdff0d1d1553 with infra container: kube-system/kindnet-pbbpw/POD" id=8bfff56e-ce2a-4674-86c8-e72c7b969c08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.051495175Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=82ecf48a-8ba8-4db4-aa65-815b5b170ea7 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.057716066Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=db43833e-2808-47ec-9eb5-e1ac70d6afea name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.061671116Z" level=info msg="Creating container: kube-system/kindnet-pbbpw/kindnet-cni" id=91fbb772-9e74-4667-8b01-c60ac0a27830 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.061861206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.091587854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.093668945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.107381774Z" level=info msg="Ran pod sandbox f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c with infra container: kube-system/kube-proxy-7k5nl/POD" id=b4f86084-b8df-4f32-a307-76f06a34a031 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.119453934Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3a5a2981-dd4f-4897-9299-0dafb44fe666 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.121250804Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7b5292b5-e675-40e5-9cfb-54654e2bbadc name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.131171563Z" level=info msg="Creating container: kube-system/kube-proxy-7k5nl/kube-proxy" id=dc927a32-8d9a-4c87-9ecc-849574547b0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.131282703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.164464054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.164986965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.256107296Z" level=info msg="Created container 327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e: kube-system/kindnet-pbbpw/kindnet-cni" id=91fbb772-9e74-4667-8b01-c60ac0a27830 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.258774528Z" level=info msg="Starting container: 327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e" id=efb9daf7-69f1-4942-8704-3df1969f9ce1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.267343295Z" level=info msg="Started container" PID=1069 containerID=327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e description=kube-system/kindnet-pbbpw/kindnet-cni id=efb9daf7-69f1-4942-8704-3df1969f9ce1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=582de72a4cb1d042e758bbf331179b4eba6c12a4d3ca3bc92d47fdff0d1d1553
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.295808758Z" level=info msg="Created container 20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199: kube-system/kube-proxy-7k5nl/kube-proxy" id=dc927a32-8d9a-4c87-9ecc-849574547b0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.300587572Z" level=info msg="Starting container: 20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199" id=5eb589f4-2b9a-4a60-a826-2cb7e072eaa4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:47 newest-cni-156330 crio[617]: time="2025-11-29T10:25:47.307397065Z" level=info msg="Started container" PID=1071 containerID=20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199 description=kube-system/kube-proxy-7k5nl/kube-proxy id=5eb589f4-2b9a-4a60-a826-2cb7e072eaa4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20aed5aaece95       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   f332b681f3cbb       kube-proxy-7k5nl                            kube-system
	327163c45989a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   582de72a4cb1d       kindnet-pbbpw                               kube-system
	3069c57ee217c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   10d59ebe39c6c       kube-controller-manager-newest-cni-156330   kube-system
	7181c5b13ec16       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   930a6a6aa34ee       etcd-newest-cni-156330                      kube-system
	7ca537fbb625a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   0e82fce72d204       kube-scheduler-newest-cni-156330            kube-system
	627ffe7d66e14       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   72db8450609de       kube-apiserver-newest-cni-156330            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-156330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-156330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-156330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_25_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:25:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-156330
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:25:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 10:25:46 +0000   Sat, 29 Nov 2025 10:25:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-156330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                18f776bf-837e-4512-96d1-eca8626890e6
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-156330                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-pbbpw                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-156330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-156330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-7k5nl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-156330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-156330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-156330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-156330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-156330 event: Registered Node newest-cni-156330 in Controller
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-156330 event: Registered Node newest-cni-156330 in Controller
	
	
	==> dmesg <==
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	[Nov29 10:25] overlayfs: idmapped layers are currently not supported
	[  +6.600462] overlayfs: idmapped layers are currently not supported
	[ +33.077974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7181c5b13ec162fb7288badb5924c819c3d78742e99f14f89d596e89d4079270] <==
	{"level":"warn","ts":"2025-11-29T10:25:43.922441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.946427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.960210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:43.998565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.012915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.026859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.045955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.064294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.083957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.108394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.122123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.167290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.183599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.186314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.208942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.231586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.254376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.277065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.301983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.314646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.344609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.368593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.405542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.414776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:44.548909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58504","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:25:54 up  3:08,  0 user,  load average: 6.46, 4.55, 3.19
	Linux newest-cni-156330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [327163c45989a57af32ca25702f81c3874d9a9dd32c52ef456f118fe7497003e] <==
	I1129 10:25:47.419207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:25:47.419448       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 10:25:47.419547       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:25:47.419558       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:25:47.419571       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:25:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:25:47.617260       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:25:47.617354       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:25:47.622269       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:25:47.623170       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [627ffe7d66e1492e6acca7d037b758cd87d7d478036bd0f90d38c255209293ec] <==
	I1129 10:25:45.967489       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 10:25:45.967525       1 policy_source.go:240] refreshing policies
	I1129 10:25:45.978490       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 10:25:45.978515       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 10:25:45.988422       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:25:45.989142       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:25:46.019331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:46.026181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:25:46.039523       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 10:25:46.039595       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 10:25:46.049750       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 10:25:46.049873       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 10:25:46.049907       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1129 10:25:46.065572       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:25:46.708630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 10:25:46.790694       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:25:47.272862       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:25:47.558654       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:25:47.688781       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:25:47.747077       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:25:47.906018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.209.215"}
	I1129 10:25:47.965443       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.108.151"}
	I1129 10:25:49.868467       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:25:49.918867       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 10:25:49.993176       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3069c57ee217c95d03dec233c09e477ed666d0956cde1ef28760ffbbce286d95] <==
	I1129 10:25:49.492495       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 10:25:49.504621       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:49.504712       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:25:49.504743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:25:49.513945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:49.514065       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 10:25:49.514701       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 10:25:49.514804       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:25:49.515904       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 10:25:49.516759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 10:25:49.516963       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 10:25:49.516975       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 10:25:49.517006       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:25:49.524273       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 10:25:49.526293       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 10:25:49.526360       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 10:25:49.530628       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 10:25:49.534210       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 10:25:49.540320       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:25:49.548178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 10:25:49.560230       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 10:25:49.560296       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:25:49.560319       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:25:49.560324       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:25:49.560330       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-proxy [20aed5aaece956840955488b6469ca66b2fe0c3f3c0e20ef4857c2f8efe58199] <==
	I1129 10:25:48.034448       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:25:48.339815       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:25:48.458364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:25:48.458431       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 10:25:48.458502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:25:48.564804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:25:48.564935       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:25:48.573370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:25:48.573900       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:25:48.574204       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:48.575608       1 config.go:200] "Starting service config controller"
	I1129 10:25:48.575680       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:25:48.575724       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:25:48.575750       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:25:48.575785       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:25:48.575811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:25:48.581264       1 config.go:309] "Starting node config controller"
	I1129 10:25:48.582318       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:25:48.582389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:25:48.682156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:25:48.691290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:25:48.698194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7ca537fbb625a78664aa04f54caa6ecd30cebe66333e5ca9a85bd03f1ba23c61] <==
	I1129 10:25:48.619162       1 serving.go:386] Generated self-signed cert in-memory
	I1129 10:25:49.221527       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 10:25:49.221564       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:49.235541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:25:49.235767       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 10:25:49.235814       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 10:25:49.235891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 10:25:49.236695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:49.236761       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:49.237001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.237045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.336093       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1129 10:25:49.341480       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 10:25:49.341677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:25:45 newest-cni-156330 kubelet[737]: E1129 10:25:45.646608     737 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-156330\" not found" node="newest-cni-156330"
	Nov 29 10:25:45 newest-cni-156330 kubelet[737]: I1129 10:25:45.785100     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.079888     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-156330\" already exists" pod="kube-system/kube-controller-manager-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.079926     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099278     737 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099394     737 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.099426     737 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.104999     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-156330\" already exists" pod="kube-system/kube-scheduler-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.105049     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.105481     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.126389     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-156330\" already exists" pod="kube-system/etcd-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.126424     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: E1129 10:25:46.152562     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-156330\" already exists" pod="kube-system/kube-apiserver-newest-cni-156330"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.580192     737 apiserver.go:52] "Watching apiserver"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.675514     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-cni-cfg\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767284     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-xtables-lock\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767305     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-xtables-lock\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767333     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5066bedf-aec0-4cb1-b9da-7073ad77a358-lib-modules\") pod \"kube-proxy-7k5nl\" (UID: \"5066bedf-aec0-4cb1-b9da-7073ad77a358\") " pod="kube-system/kube-proxy-7k5nl"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.767351     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91c0b846-d32c-4a34-b86e-0a70463acf97-lib-modules\") pod \"kindnet-pbbpw\" (UID: \"91c0b846-d32c-4a34-b86e-0a70463acf97\") " pod="kube-system/kindnet-pbbpw"
	Nov 29 10:25:46 newest-cni-156330 kubelet[737]: I1129 10:25:46.827598     737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 29 10:25:47 newest-cni-156330 kubelet[737]: W1129 10:25:47.094638     737 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3766eb4494340fa9dcfe438bee2b4ac3cebff28d52a114ccb82a30c242077275/crio-f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c WatchSource:0}: Error finding container f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c: Status 404 returned error can't find the container with id f332b681f3cbb5a30d9d04acc64038cd20beb8ae12e1727876c801c0ed0bc85c
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:25:49 newest-cni-156330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-156330 -n newest-cni-156330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-156330 -n newest-cni-156330: exit status 2 (373.804663ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-156330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv: exit status 1 (97.864158ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qmqkb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-5ngmz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-74sjv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-156330 describe pod coredns-66bc5c9577-qmqkb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-5ngmz kubernetes-dashboard-855c9754f9-74sjv: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.79s)
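For reference, the non-running-pod check that the post-mortem above relies on can be repeated by hand against the same kubeconfig context; this is a minimal sketch using the profile name taken from the log, and it is just the command already shown at helpers_test.go:269 with explicit quoting:

	# list pods in any namespace whose phase is not Running
	kubectl --context newest-cni-156330 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

The NotFound errors from the subsequent describe suggest those pods were removed between the two commands, so the describe step producing exit status 1 is expected here.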

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-194354 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-194354 --alsologtostderr -v=1: exit status 80 (2.580372096s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-194354 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:25:59.889531  524340 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:59.889682  524340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:59.889689  524340 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:59.889694  524340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:59.889928  524340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:59.890209  524340 out.go:368] Setting JSON to false
	I1129 10:25:59.890227  524340 mustload.go:66] Loading cluster: default-k8s-diff-port-194354
	I1129 10:25:59.890695  524340 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:59.891166  524340 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-194354 --format={{.State.Status}}
	I1129 10:25:59.915878  524340 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:25:59.916211  524340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:26:00.051913  524340 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-11-29 10:25:59.999997972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:26:00.052649  524340 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-194354 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 10:26:00.063613  524340 out.go:179] * Pausing node default-k8s-diff-port-194354 ... 
	I1129 10:26:00.077422  524340 host.go:66] Checking if "default-k8s-diff-port-194354" exists ...
	I1129 10:26:00.077842  524340 ssh_runner.go:195] Run: systemctl --version
	I1129 10:26:00.077909  524340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-194354
	I1129 10:26:00.144893  524340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/default-k8s-diff-port-194354/id_rsa Username:docker}
	I1129 10:26:00.343960  524340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:26:00.380919  524340 pause.go:52] kubelet running: true
	I1129 10:26:00.381102  524340 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:26:00.737014  524340 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:26:00.737147  524340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:26:00.833968  524340 cri.go:89] found id: "f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0"
	I1129 10:26:00.833994  524340 cri.go:89] found id: "4156a03d4471982d7f1e0ee87abbd4c23d7ad47acfd3d7c9f78e30a38f482262"
	I1129 10:26:00.834000  524340 cri.go:89] found id: "d4349b4db02dbc959452e72b01b679dd6797bd59f8e1c1f1e9ceeba80768722c"
	I1129 10:26:00.834005  524340 cri.go:89] found id: "dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd"
	I1129 10:26:00.834009  524340 cri.go:89] found id: "80376f1b84a8212571ab445745322c485da4dee9d893fccb971c5a4a8628bad1"
	I1129 10:26:00.834012  524340 cri.go:89] found id: "ce0fd82d0bd79a2222344eb64f283a2f997b836dc9783c79d7896af82a254d18"
	I1129 10:26:00.834016  524340 cri.go:89] found id: "780974d2b2f4b2a8795f2a71e0983d493f7a5959e65c3f800b7e7bed5c5841be"
	I1129 10:26:00.834018  524340 cri.go:89] found id: "022e047748e69418af4ebc42eb96a45df83b9d1d7f5c7d95684372ff9198d7ca"
	I1129 10:26:00.834022  524340 cri.go:89] found id: "63793328bc8752412204a8263047290b0453435f744f79a3ca344412702eda5f"
	I1129 10:26:00.834029  524340 cri.go:89] found id: "2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	I1129 10:26:00.834032  524340 cri.go:89] found id: "71c27d6cbb9d418c72d89e546ad23a2171f1d6e4642d3ed24033fdf16a87b5d4"
	I1129 10:26:00.834035  524340 cri.go:89] found id: ""
	I1129 10:26:00.834107  524340 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:26:00.845364  524340 retry.go:31] will retry after 260.388508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:26:00Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:26:01.106878  524340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:26:01.122137  524340 pause.go:52] kubelet running: false
	I1129 10:26:01.122264  524340 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:26:01.395271  524340 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:26:01.395416  524340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:26:01.485719  524340 cri.go:89] found id: "f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0"
	I1129 10:26:01.485745  524340 cri.go:89] found id: "4156a03d4471982d7f1e0ee87abbd4c23d7ad47acfd3d7c9f78e30a38f482262"
	I1129 10:26:01.485750  524340 cri.go:89] found id: "d4349b4db02dbc959452e72b01b679dd6797bd59f8e1c1f1e9ceeba80768722c"
	I1129 10:26:01.485754  524340 cri.go:89] found id: "dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd"
	I1129 10:26:01.485758  524340 cri.go:89] found id: "80376f1b84a8212571ab445745322c485da4dee9d893fccb971c5a4a8628bad1"
	I1129 10:26:01.485761  524340 cri.go:89] found id: "ce0fd82d0bd79a2222344eb64f283a2f997b836dc9783c79d7896af82a254d18"
	I1129 10:26:01.485764  524340 cri.go:89] found id: "780974d2b2f4b2a8795f2a71e0983d493f7a5959e65c3f800b7e7bed5c5841be"
	I1129 10:26:01.485790  524340 cri.go:89] found id: "022e047748e69418af4ebc42eb96a45df83b9d1d7f5c7d95684372ff9198d7ca"
	I1129 10:26:01.485801  524340 cri.go:89] found id: "63793328bc8752412204a8263047290b0453435f744f79a3ca344412702eda5f"
	I1129 10:26:01.485820  524340 cri.go:89] found id: "2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	I1129 10:26:01.485827  524340 cri.go:89] found id: "71c27d6cbb9d418c72d89e546ad23a2171f1d6e4642d3ed24033fdf16a87b5d4"
	I1129 10:26:01.485831  524340 cri.go:89] found id: ""
	I1129 10:26:01.485896  524340 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:26:01.497216  524340 retry.go:31] will retry after 451.822074ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:26:01Z" level=error msg="open /run/runc: no such file or directory"
	I1129 10:26:01.949954  524340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 10:26:01.963627  524340 pause.go:52] kubelet running: false
	I1129 10:26:01.963725  524340 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 10:26:02.149696  524340 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 10:26:02.149782  524340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 10:26:02.222654  524340 cri.go:89] found id: "f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0"
	I1129 10:26:02.222681  524340 cri.go:89] found id: "4156a03d4471982d7f1e0ee87abbd4c23d7ad47acfd3d7c9f78e30a38f482262"
	I1129 10:26:02.222691  524340 cri.go:89] found id: "d4349b4db02dbc959452e72b01b679dd6797bd59f8e1c1f1e9ceeba80768722c"
	I1129 10:26:02.222699  524340 cri.go:89] found id: "dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd"
	I1129 10:26:02.222703  524340 cri.go:89] found id: "80376f1b84a8212571ab445745322c485da4dee9d893fccb971c5a4a8628bad1"
	I1129 10:26:02.222707  524340 cri.go:89] found id: "ce0fd82d0bd79a2222344eb64f283a2f997b836dc9783c79d7896af82a254d18"
	I1129 10:26:02.222710  524340 cri.go:89] found id: "780974d2b2f4b2a8795f2a71e0983d493f7a5959e65c3f800b7e7bed5c5841be"
	I1129 10:26:02.222712  524340 cri.go:89] found id: "022e047748e69418af4ebc42eb96a45df83b9d1d7f5c7d95684372ff9198d7ca"
	I1129 10:26:02.222715  524340 cri.go:89] found id: "63793328bc8752412204a8263047290b0453435f744f79a3ca344412702eda5f"
	I1129 10:26:02.222721  524340 cri.go:89] found id: "2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	I1129 10:26:02.222725  524340 cri.go:89] found id: "71c27d6cbb9d418c72d89e546ad23a2171f1d6e4642d3ed24033fdf16a87b5d4"
	I1129 10:26:02.222728  524340 cri.go:89] found id: ""
	I1129 10:26:02.222781  524340 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 10:26:02.283061  524340 out.go:203] 
	W1129 10:26:02.312703  524340 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:26:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T10:26:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 10:26:02.312731  524340 out.go:285] * 
	* 
	W1129 10:26:02.320252  524340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 10:26:02.377257  524340 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-194354 --alsologtostderr -v=1 failed: exit status 80
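The stderr trace above shows where the pause actually failed: kubelet was stopped successfully, the CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces were found via crictl, and then every attempt to run sudo runc list -f json exited with status 1 and "open /run/runc: no such file or directory". One way to re-run those probes by hand, assuming the kic container is still up (its name matches the profile, per the docker inspect output below), is to exec into it with docker:

	# kubelet state after the pause attempt
	docker exec default-k8s-diff-port-194354 sudo systemctl is-active kubelet
	# the container listing that succeeded
	docker exec default-k8s-diff-port-194354 sudo crictl ps -a --quiet \
	    --label io.kubernetes.pod.namespace=kube-system
	# the call that failed with "open /run/runc: no such file or directory"
	docker exec default-k8s-diff-port-194354 sudo runc list -f json

The docker exec transport is only a convenience for manual reproduction; the test itself runs these commands over SSH, as the ssh_runner lines in the trace show.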
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-194354
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-194354:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	        "Created": "2025-11-29T10:23:08.777622833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516871,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:24:52.90493071Z",
	            "FinishedAt": "2025-11-29T10:24:51.954656482Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hosts",
	        "LogPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88-json.log",
	        "Name": "/default-k8s-diff-port-194354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-194354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-194354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	                "LowerDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-194354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-194354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-194354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1820d94bfdf2e9ded26099965a20b40a34319f1178b06ac744357a6a1c9d6a62",
	            "SandboxKey": "/var/run/docker/netns/1820d94bfdf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-194354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:ac:c9:cd:0a:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a57979b7c8de5b2d73e81501e805dfbd816f410a202f054d691d84e66ed18d",
	                    "EndpointID": "6410c302361753d64e95efbcf8d8beb5ef633e91d40f38e815b49203085ac0b9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-194354",
	                        "4c5ba5cc2474"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
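The Ports block in this inspect output is what the pause command queried earlier (the cli_runner line at 10:26:00.077909) to find the forwarded SSH endpoint. The same lookup can be issued directly with the Go template from that log line; in this run it resolves to 33461, matching the sshutil line in the pause trace:

	docker container inspect -f \
	    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    default-k8s-diff-port-194354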
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354: exit status 2 (357.413996ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25: (1.9610737s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ stop    │ -p default-k8s-diff-port-194354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ stop    │ -p newest-cni-156330 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-156330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ image   │ newest-cni-156330 image list --format=json                                                                                                                                                                                                    │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p newest-cni-156330 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ delete  │ -p newest-cni-156330                                                                                                                                                                                                                          │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ delete  │ -p newest-cni-156330                                                                                                                                                                                                                          │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p auto-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-151203                  │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ image   │ default-k8s-diff-port-194354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p default-k8s-diff-port-194354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:25:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:25:57.281255  524003 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:57.281434  524003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:57.281445  524003 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:57.281451  524003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:57.281698  524003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:57.282315  524003 out.go:368] Setting JSON to false
	I1129 10:25:57.283272  524003 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11307,"bootTime":1764400651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:25:57.283342  524003 start.go:143] virtualization:  
	I1129 10:25:57.287551  524003 out.go:179] * [auto-151203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:25:57.290977  524003 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:25:57.291120  524003 notify.go:221] Checking for updates...
	I1129 10:25:57.297817  524003 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:25:57.301021  524003 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:57.304143  524003 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:25:57.307341  524003 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:25:57.310435  524003 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:25:57.313968  524003 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:57.314112  524003 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:25:57.344059  524003 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:25:57.344199  524003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:57.402865  524003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:57.393295851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:57.402985  524003 docker.go:319] overlay module found
	I1129 10:25:57.406241  524003 out.go:179] * Using the docker driver based on user configuration
	I1129 10:25:57.409289  524003 start.go:309] selected driver: docker
	I1129 10:25:57.409315  524003 start.go:927] validating driver "docker" against <nil>
	I1129 10:25:57.409344  524003 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:25:57.410172  524003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:57.468512  524003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:57.459381899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:57.468671  524003 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:25:57.468899  524003 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:25:57.471965  524003 out.go:179] * Using Docker driver with root privileges
	I1129 10:25:57.474991  524003 cni.go:84] Creating CNI manager for ""
	I1129 10:25:57.475071  524003 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:57.475088  524003 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:25:57.475179  524003 start.go:353] cluster config:
	{Name:auto-151203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-151203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1129 10:25:57.478334  524003 out.go:179] * Starting "auto-151203" primary control-plane node in "auto-151203" cluster
	I1129 10:25:57.481259  524003 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:25:57.484250  524003 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:25:57.487146  524003 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:57.487206  524003 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:25:57.487220  524003 cache.go:65] Caching tarball of preloaded images
	I1129 10:25:57.487234  524003 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:25:57.487311  524003 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:25:57.487322  524003 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:25:57.487436  524003 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/config.json ...
	I1129 10:25:57.487455  524003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/config.json: {Name:mk235a254a51c1d63a10263d1e1c65333918e47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:57.506866  524003 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:25:57.506891  524003 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:25:57.506905  524003 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:25:57.506942  524003 start.go:360] acquireMachinesLock for auto-151203: {Name:mk09cb03dea7ff71ca882e7cda6650375f6dc25e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:25:57.507075  524003 start.go:364] duration metric: took 111.73µs to acquireMachinesLock for "auto-151203"
	I1129 10:25:57.507119  524003 start.go:93] Provisioning new machine with config: &{Name:auto-151203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-151203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:57.507199  524003 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:25:57.510754  524003 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:25:57.511006  524003 start.go:159] libmachine.API.Create for "auto-151203" (driver="docker")
	I1129 10:25:57.511045  524003 client.go:173] LocalClient.Create starting
	I1129 10:25:57.511121  524003 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:25:57.511158  524003 main.go:143] libmachine: Decoding PEM data...
	I1129 10:25:57.511178  524003 main.go:143] libmachine: Parsing certificate...
	I1129 10:25:57.511252  524003 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:25:57.511279  524003 main.go:143] libmachine: Decoding PEM data...
	I1129 10:25:57.511297  524003 main.go:143] libmachine: Parsing certificate...
	I1129 10:25:57.511660  524003 cli_runner.go:164] Run: docker network inspect auto-151203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:25:57.527539  524003 cli_runner.go:211] docker network inspect auto-151203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:25:57.527626  524003 network_create.go:284] running [docker network inspect auto-151203] to gather additional debugging logs...
	I1129 10:25:57.527650  524003 cli_runner.go:164] Run: docker network inspect auto-151203
	W1129 10:25:57.544399  524003 cli_runner.go:211] docker network inspect auto-151203 returned with exit code 1
	I1129 10:25:57.544451  524003 network_create.go:287] error running [docker network inspect auto-151203]: docker network inspect auto-151203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-151203 not found
	I1129 10:25:57.544466  524003 network_create.go:289] output of [docker network inspect auto-151203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-151203 not found
	
	** /stderr **
	I1129 10:25:57.544568  524003 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:25:57.562259  524003 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:25:57.562695  524003 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:25:57.562952  524003 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:25:57.563391  524003 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1000}
	I1129 10:25:57.563414  524003 network_create.go:124] attempt to create docker network auto-151203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 10:25:57.563470  524003 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-151203 auto-151203
	I1129 10:25:57.626052  524003 network_create.go:108] docker network auto-151203 192.168.76.0/24 created
	I1129 10:25:57.626092  524003 kic.go:121] calculated static IP "192.168.76.2" for the "auto-151203" container
	I1129 10:25:57.626170  524003 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:25:57.641906  524003 cli_runner.go:164] Run: docker volume create auto-151203 --label name.minikube.sigs.k8s.io=auto-151203 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:25:57.659307  524003 oci.go:103] Successfully created a docker volume auto-151203
	I1129 10:25:57.659396  524003 cli_runner.go:164] Run: docker run --rm --name auto-151203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-151203 --entrypoint /usr/bin/test -v auto-151203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:25:58.240981  524003 oci.go:107] Successfully prepared a docker volume auto-151203
	I1129 10:25:58.241058  524003 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:58.241075  524003 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 10:25:58.241162  524003 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-151203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
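
For reference, the host-side provisioning steps logged above can be replayed by hand with the same docker CLI calls minikube issued. The commands below are an illustrative sketch only; the network name, subnet, volume name, preload path and kicbase digest are simply the values this particular run chose, not fixed defaults:

  # create the isolated bridge network minikube picked after skipping the already-taken subnets
  docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-151203 auto-151203
  # back the node with a named volume and unpack the preloaded image tarball into it
  docker volume create auto-151203 --label name.minikube.sigs.k8s.io=auto-151203 --label created_by.minikube.sigs.k8s.io=true
  docker run --rm --entrypoint /usr/bin/tar \
    -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
    -v auto-151203:/extractDir \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
    -I lz4 -xf /preloaded.tar -C /extractDir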
	
	
	==> CRI-O <==
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.50361414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f2e83666-5a2e-43bb-b5de-18b7c883dbf5 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.506814622Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=46cc8475-fd51-4718-adcc-6b4c8bb8551f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.50695192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.521869308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523198842Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a9e6cf1b748a550eac08711f551accdeb51584ec18d29afff77a6d66384c7d81/merged/etc/passwd: no such file or directory"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523343229Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a9e6cf1b748a550eac08711f551accdeb51584ec18d29afff77a6d66384c7d81/merged/etc/group: no such file or directory"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523686593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.561021791Z" level=info msg="Created container f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0: kube-system/storage-provisioner/storage-provisioner" id=46cc8475-fd51-4718-adcc-6b4c8bb8551f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.562502375Z" level=info msg="Starting container: f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0" id=41afb9fb-610c-4d98-96a4-4465c4d503e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.565228413Z" level=info msg="Started container" PID=1641 containerID=f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0 description=kube-system/storage-provisioner/storage-provisioner id=41afb9fb-610c-4d98-96a4-4465c4d503e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e134d07fe49ce545563b10c6f804457c5c73ad6fbbe8f72e49447ccadb371375
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.33857475Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342274109Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342306331Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342331103Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346218173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346393756Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346470401Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350674644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350865974Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350954016Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357067377Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357254776Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357384435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.365486587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.365699628Z" level=info msg="Updated default CNI network name to kindnet"
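
The burst of CNI monitoring events above is kindnet rewriting its config atomically (write to 10-kindnet.conflist.temp, then rename), with CRI-O re-resolving the default network after each change. To confirm what configuration the runtime ended up with, the file can be read straight out of the node; a small sketch, assuming the kic node is reachable as the docker container named in this log:

  docker exec default-k8s-diff-port-194354 ls /etc/cni/net.d/
  docker exec default-k8s-diff-port-194354 cat /etc/cni/net.d/10-kindnet.conflist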
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f66d514a2eb92       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   e134d07fe49ce       storage-provisioner                                    kube-system
	2ae519b1dac1d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   7ffb840300972       dashboard-metrics-scraper-6ffb444bf9-rk7jz             kubernetes-dashboard
	71c27d6cbb9d4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   e71466fa1a862       kubernetes-dashboard-855c9754f9-fxsbl                  kubernetes-dashboard
	4156a03d44719       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   696eb62c8bc7d       coredns-66bc5c9577-8rvzs                               kube-system
	4cb823a840ce0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   374f0d9221484       busybox                                                default
	d4349b4db02db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   99707f787c896       kube-proxy-68szw                                       kube-system
	dd920d356015c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago       Exited              storage-provisioner         1                   e134d07fe49ce       storage-provisioner                                    kube-system
	80376f1b84a82       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   4c44c9172ae57       kindnet-7xnqr                                          kube-system
	ce0fd82d0bd79       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1012dcf01e3db       kube-controller-manager-default-k8s-diff-port-194354   kube-system
	780974d2b2f4b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   48b10c8ecf1e2       kube-apiserver-default-k8s-diff-port-194354            kube-system
	022e047748e69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6ee324d4e76e7       etcd-default-k8s-diff-port-194354                      kube-system
	63793328bc875       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0099bb686601a       kube-scheduler-default-k8s-diff-port-194354            kube-system
	
	
	==> coredns [4156a03d4471982d7f1e0ee87abbd4c23d7ad47acfd3d7c9f78e30a38f482262] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46438 - 29375 "HINFO IN 1137683522763094149.9106790434843673739. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026745254s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
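
The i/o timeouts above show this CoreDNS instance could not reach the in-cluster apiserver VIP (10.96.0.1:443) right after the restart, which lines up with kube-proxy and kindnet still syncing their caches in the component logs further down; the errors stop once rules are programmed. A minimal sketch for checking the same path by hand, assuming kubectl is pointed at this cluster's kubeconfig (the kube-proxy pod name is the one from this report):

  kubectl get svc kubernetes          # should show the ClusterIP 10.96.0.1
  kubectl get endpoints kubernetes    # should list the apiserver endpoint
  kubectl -n kube-system logs kube-proxy-68szw | tail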
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-194354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-194354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-194354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:23:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-194354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:25:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-194354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                c5e28edc-52c7-4b90-b67b-b957ca9e0425
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-8rvzs                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-194354                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-7xnqr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-194354             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-194354    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-68szw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-194354             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rk7jz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fxsbl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-194354 event: Registered Node default-k8s-diff-port-194354 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-194354 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node default-k8s-diff-port-194354 event: Registered Node default-k8s-diff-port-194354 in Controller
	
	
	==> dmesg <==
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	[Nov29 10:25] overlayfs: idmapped layers are currently not supported
	[  +6.600462] overlayfs: idmapped layers are currently not supported
	[ +33.077974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [022e047748e69418af4ebc42eb96a45df83b9d1d7f5c7d95684372ff9198d7ca] <==
	{"level":"warn","ts":"2025-11-29T10:25:07.958671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.009556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.094410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.124551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.178617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.208198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.240124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.266262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.286680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.329671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.369099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.403583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.432956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.471696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.514021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.554138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.588454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.631161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.662275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.749223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.819236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.872557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.888214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.930710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:09.046213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:26:04 up  3:08,  0 user,  load average: 5.85, 4.48, 3.18
	Linux default-k8s-diff-port-194354 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [80376f1b84a8212571ab445745322c485da4dee9d893fccb971c5a4a8628bad1] <==
	I1129 10:25:13.123522       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:25:13.123735       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:25:13.123841       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:25:13.123854       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:25:13.123868       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:25:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:25:13.343849       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:25:13.343870       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:25:13.343879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:25:13.344005       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:25:43.343346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:25:43.343550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:25:43.343729       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:25:43.343869       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:25:44.844701       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:25:44.844823       1 metrics.go:72] Registering metrics
	I1129 10:25:44.844927       1 controller.go:711] "Syncing nftables rules"
	I1129 10:25:53.338169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:25:53.338305       1 main.go:301] handling current node
	I1129 10:26:03.342377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:26:03.342408       1 main.go:301] handling current node
	
	
	==> kube-apiserver [780974d2b2f4b2a8795f2a71e0983d493f7a5959e65c3f800b7e7bed5c5841be] <==
	I1129 10:25:11.414495       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 10:25:11.422998       1 policy_source.go:240] refreshing policies
	I1129 10:25:11.423646       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:25:11.414160       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:25:11.414412       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:25:11.425620       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 10:25:11.425887       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:25:11.425993       1 aggregator.go:171] initial CRD sync complete...
	I1129 10:25:11.426028       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:25:11.426057       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:25:11.430215       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:25:11.458319       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:25:11.459484       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:11.470437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1129 10:25:11.481151       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:25:12.626801       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:25:13.427290       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:25:13.769716       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:25:13.832746       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:25:13.931010       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:25:14.114322       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.152.58"}
	I1129 10:25:14.140810       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.56.227"}
	I1129 10:25:15.777492       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:25:15.826957       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:25:15.881882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ce0fd82d0bd79a2222344eb64f283a2f997b836dc9783c79d7896af82a254d18] <==
	I1129 10:25:15.694937       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:25:15.704291       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:25:15.704673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 10:25:15.708786       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:25:15.708950       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:25:15.709068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-194354"
	I1129 10:25:15.709153       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:25:15.709730       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:15.709805       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:25:15.709844       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:25:15.709788       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 10:25:15.715108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:15.720207       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:25:15.720448       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 10:25:15.720532       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:25:15.724995       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 10:25:15.733940       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 10:25:15.734106       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:25:15.734183       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:25:15.734216       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:25:15.734245       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 10:25:15.735627       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:25:15.735723       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:25:15.740077       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:25:15.752445       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [d4349b4db02dbc959452e72b01b679dd6797bd59f8e1c1f1e9ceeba80768722c] <==
	I1129 10:25:13.953387       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:25:14.236578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:25:14.338993       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:25:14.339034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:25:14.339108       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:25:14.599462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:25:14.599586       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:25:14.640610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:25:14.640967       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:25:14.641028       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:14.646487       1 config.go:200] "Starting service config controller"
	I1129 10:25:14.648925       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:25:14.656975       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:25:14.657070       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:25:14.657117       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:25:14.657165       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:25:14.657894       1 config.go:309] "Starting node config controller"
	I1129 10:25:14.662182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:25:14.662258       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:25:14.753665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:25:14.757901       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:25:14.757961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [63793328bc8752412204a8263047290b0453435f744f79a3ca344412702eda5f] <==
	I1129 10:25:11.145521       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:11.209719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:11.222725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:11.222677       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:25:11.222703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 10:25:11.350708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:25:11.351074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:25:11.351157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:25:11.351224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:25:11.351272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:25:11.351322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:25:11.351368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:25:11.351422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:25:11.351474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:25:11.351522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:25:11.351564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:25:11.351613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:25:11.351662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:25:11.351718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:25:11.351814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:25:11.351847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:25:11.351888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:25:11.351929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:25:11.351983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1129 10:25:12.236925       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.386904     790 projected.go:196] Error preparing data for projected volume kube-api-access-bsqqc for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.386978     790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/576f53f6-546e-4dda-9d24-84453d5864d0-kube-api-access-bsqqc podName:576f53f6-546e-4dda-9d24-84453d5864d0 nodeName:}" failed. No retries permitted until 2025-11-29 10:25:17.886957116 +0000 UTC m=+17.348219006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bsqqc" (UniqueName: "kubernetes.io/projected/576f53f6-546e-4dda-9d24-84453d5864d0-kube-api-access-bsqqc") pod "dashboard-metrics-scraper-6ffb444bf9-rk7jz" (UID: "576f53f6-546e-4dda-9d24-84453d5864d0") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471348     790 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471547     790 projected.go:196] Error preparing data for projected volume kube-api-access-kvp5m for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fxsbl: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471673     790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/286ab319-2221-4ac1-9d62-92ceeb4e7c1d-kube-api-access-kvp5m podName:286ab319-2221-4ac1-9d62-92ceeb4e7c1d nodeName:}" failed. No retries permitted until 2025-11-29 10:25:17.971650521 +0000 UTC m=+17.432912412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kvp5m" (UniqueName: "kubernetes.io/projected/286ab319-2221-4ac1-9d62-92ceeb4e7c1d-kube-api-access-kvp5m") pod "kubernetes-dashboard-855c9754f9-fxsbl" (UID: "286ab319-2221-4ac1-9d62-92ceeb4e7c1d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:18 default-k8s-diff-port-194354 kubelet[790]: W1129 10:25:18.121799     790 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99 WatchSource:0}: Error finding container e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99: Status 404 returned error can't find the container with id e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99
	Nov 29 10:25:24 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:24.427490     790 scope.go:117] "RemoveContainer" containerID="3645cf8c978c5087f1f339c9397576f7d89a417f8b6fc642cb5e6b9e4210e339"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:25.442484     790 scope.go:117] "RemoveContainer" containerID="3645cf8c978c5087f1f339c9397576f7d89a417f8b6fc642cb5e6b9e4210e339"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:25.442800     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:25.442953     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:26 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:26.449292     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:26 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:26.449464     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:28 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:28.049634     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:28 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:28.049880     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:38 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:38.975199     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.485417     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.486128     790 scope.go:117] "RemoveContainer" containerID="2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:39.487515     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.517367     790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fxsbl" podStartSLOduration=10.122447307 podStartE2EDuration="23.51734952s" podCreationTimestamp="2025-11-29 10:25:16 +0000 UTC" firstStartedPulling="2025-11-29 10:25:18.124724991 +0000 UTC m=+17.585986882" lastFinishedPulling="2025-11-29 10:25:31.519627196 +0000 UTC m=+30.980889095" observedRunningTime="2025-11-29 10:25:32.480820133 +0000 UTC m=+31.942082040" watchObservedRunningTime="2025-11-29 10:25:39.51734952 +0000 UTC m=+38.978611411"
	Nov 29 10:25:43 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:43.500084     790 scope.go:117] "RemoveContainer" containerID="dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd"
	Nov 29 10:25:48 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:48.050240     790 scope.go:117] "RemoveContainer" containerID="2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	Nov 29 10:25:48 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:48.050889     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [71c27d6cbb9d418c72d89e546ad23a2171f1d6e4642d3ed24033fdf16a87b5d4] <==
	2025/11/29 10:25:31 Starting overwatch
	2025/11/29 10:25:31 Using namespace: kubernetes-dashboard
	2025/11/29 10:25:31 Using in-cluster config to connect to apiserver
	2025/11/29 10:25:31 Using secret token for csrf signing
	2025/11/29 10:25:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:25:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:25:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:25:31 Generating JWE encryption key
	2025/11/29 10:25:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:25:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:25:32 Initializing JWE encryption key from synchronized object
	2025/11/29 10:25:32 Creating in-cluster Sidecar client
	2025/11/29 10:25:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:25:32 Serving insecurely on HTTP port: 9090
	2025/11/29 10:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd] <==
	I1129 10:25:13.180489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:25:43.223459       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0] <==
	I1129 10:25:43.610671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:25:43.636337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:25:43.642210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:25:43.647213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:47.102618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:51.363024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:54.962398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:58.015810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.038201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.051821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:26:01.052056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:26:01.054245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30!
	I1129 10:26:01.055925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55fa2c2d-ed1d-4d3e-8f29-7e39e322961c", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30 became leader
	W1129 10:26:01.062409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.076953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:26:01.154734       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30!
	W1129 10:26:03.080658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:03.090280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354: exit status 2 (376.118023ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-194354
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-194354:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	        "Created": "2025-11-29T10:23:08.777622833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516871,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T10:24:52.90493071Z",
	            "FinishedAt": "2025-11-29T10:24:51.954656482Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/hosts",
	        "LogPath": "/var/lib/docker/containers/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88-json.log",
	        "Name": "/default-k8s-diff-port-194354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-194354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-194354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88",
	                "LowerDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43-init/diff:/var/lib/docker/overlay2/ce26ffc928d32ef81a09764abb37a70a2f06530a814d74bb1a8c0eb674eaada7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c43d2b579956ac5fc245025f3524f146fa0f6c6a40a59dba04a332f4c13ab43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-194354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-194354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-194354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-194354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1820d94bfdf2e9ded26099965a20b40a34319f1178b06ac744357a6a1c9d6a62",
	            "SandboxKey": "/var/run/docker/netns/1820d94bfdf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-194354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:ac:c9:cd:0a:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a57979b7c8de5b2d73e81501e805dfbd816f410a202f054d691d84e66ed18d",
	                    "EndpointID": "6410c302361753d64e95efbcf8d8beb5ef633e91d40f38e815b49203085ac0b9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-194354",
	                        "4c5ba5cc2474"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354: exit status 2 (358.7706ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-194354 logs -n 25: (1.229941449s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-949993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │                     │
	│ stop    │ -p no-preload-949993 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ addons  │ enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:23 UTC │
	│ start   │ -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:23 UTC │ 29 Nov 25 10:24 UTC │
	│ image   │ no-preload-949993 image list --format=json                                                                                                                                                                                                    │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ pause   │ -p no-preload-949993 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-194354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │                     │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ stop    │ -p default-k8s-diff-port-194354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ delete  │ -p no-preload-949993                                                                                                                                                                                                                          │ no-preload-949993            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:24 UTC │
	│ start   │ -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:24 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-156330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ stop    │ -p newest-cni-156330 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-156330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ image   │ newest-cni-156330 image list --format=json                                                                                                                                                                                                    │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p newest-cni-156330 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ delete  │ -p newest-cni-156330                                                                                                                                                                                                                          │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ delete  │ -p newest-cni-156330                                                                                                                                                                                                                          │ newest-cni-156330            │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ start   │ -p auto-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-151203                  │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	│ image   │ default-k8s-diff-port-194354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │ 29 Nov 25 10:25 UTC │
	│ pause   │ -p default-k8s-diff-port-194354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-194354 │ jenkins │ v1.37.0 │ 29 Nov 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 10:25:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 10:25:57.281255  524003 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:25:57.281434  524003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:57.281445  524003 out.go:374] Setting ErrFile to fd 2...
	I1129 10:25:57.281451  524003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:25:57.281698  524003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:25:57.282315  524003 out.go:368] Setting JSON to false
	I1129 10:25:57.283272  524003 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11307,"bootTime":1764400651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:25:57.283342  524003 start.go:143] virtualization:  
	I1129 10:25:57.287551  524003 out.go:179] * [auto-151203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:25:57.290977  524003 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:25:57.291120  524003 notify.go:221] Checking for updates...
	I1129 10:25:57.297817  524003 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:25:57.301021  524003 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:25:57.304143  524003 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:25:57.307341  524003 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:25:57.310435  524003 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:25:57.313968  524003 config.go:182] Loaded profile config "default-k8s-diff-port-194354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:25:57.314112  524003 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:25:57.344059  524003 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:25:57.344199  524003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:57.402865  524003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:57.393295851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:57.402985  524003 docker.go:319] overlay module found
	I1129 10:25:57.406241  524003 out.go:179] * Using the docker driver based on user configuration
	I1129 10:25:57.409289  524003 start.go:309] selected driver: docker
	I1129 10:25:57.409315  524003 start.go:927] validating driver "docker" against <nil>
	I1129 10:25:57.409344  524003 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:25:57.410172  524003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:25:57.468512  524003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:25:57.459381899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:25:57.468671  524003 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 10:25:57.468899  524003 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 10:25:57.471965  524003 out.go:179] * Using Docker driver with root privileges
	I1129 10:25:57.474991  524003 cni.go:84] Creating CNI manager for ""
	I1129 10:25:57.475071  524003 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 10:25:57.475088  524003 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 10:25:57.475179  524003 start.go:353] cluster config:
	{Name:auto-151203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-151203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 10:25:57.478334  524003 out.go:179] * Starting "auto-151203" primary control-plane node in "auto-151203" cluster
	I1129 10:25:57.481259  524003 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 10:25:57.484250  524003 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 10:25:57.487146  524003 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:57.487206  524003 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1129 10:25:57.487220  524003 cache.go:65] Caching tarball of preloaded images
	I1129 10:25:57.487234  524003 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 10:25:57.487311  524003 preload.go:238] Found /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1129 10:25:57.487322  524003 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 10:25:57.487436  524003 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/config.json ...
	I1129 10:25:57.487455  524003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/config.json: {Name:mk235a254a51c1d63a10263d1e1c65333918e47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 10:25:57.506866  524003 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 10:25:57.506891  524003 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 10:25:57.506905  524003 cache.go:243] Successfully downloaded all kic artifacts
	I1129 10:25:57.506942  524003 start.go:360] acquireMachinesLock for auto-151203: {Name:mk09cb03dea7ff71ca882e7cda6650375f6dc25e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 10:25:57.507075  524003 start.go:364] duration metric: took 111.73µs to acquireMachinesLock for "auto-151203"
	I1129 10:25:57.507119  524003 start.go:93] Provisioning new machine with config: &{Name:auto-151203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-151203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 10:25:57.507199  524003 start.go:125] createHost starting for "" (driver="docker")
	I1129 10:25:57.510754  524003 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 10:25:57.511006  524003 start.go:159] libmachine.API.Create for "auto-151203" (driver="docker")
	I1129 10:25:57.511045  524003 client.go:173] LocalClient.Create starting
	I1129 10:25:57.511121  524003 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/ca.pem
	I1129 10:25:57.511158  524003 main.go:143] libmachine: Decoding PEM data...
	I1129 10:25:57.511178  524003 main.go:143] libmachine: Parsing certificate...
	I1129 10:25:57.511252  524003 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-300311/.minikube/certs/cert.pem
	I1129 10:25:57.511279  524003 main.go:143] libmachine: Decoding PEM data...
	I1129 10:25:57.511297  524003 main.go:143] libmachine: Parsing certificate...
	I1129 10:25:57.511660  524003 cli_runner.go:164] Run: docker network inspect auto-151203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 10:25:57.527539  524003 cli_runner.go:211] docker network inspect auto-151203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 10:25:57.527626  524003 network_create.go:284] running [docker network inspect auto-151203] to gather additional debugging logs...
	I1129 10:25:57.527650  524003 cli_runner.go:164] Run: docker network inspect auto-151203
	W1129 10:25:57.544399  524003 cli_runner.go:211] docker network inspect auto-151203 returned with exit code 1
	I1129 10:25:57.544451  524003 network_create.go:287] error running [docker network inspect auto-151203]: docker network inspect auto-151203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-151203 not found
	I1129 10:25:57.544466  524003 network_create.go:289] output of [docker network inspect auto-151203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-151203 not found
	
	** /stderr **
	I1129 10:25:57.544568  524003 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 10:25:57.562259  524003 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
	I1129 10:25:57.562695  524003 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf66364546bb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1a:25:6d:94:37:dd} reservation:<nil>}
	I1129 10:25:57.562952  524003 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d78444b552f4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:b1:d6:7c:04:eb} reservation:<nil>}
	I1129 10:25:57.563391  524003 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1000}
	I1129 10:25:57.563414  524003 network_create.go:124] attempt to create docker network auto-151203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 10:25:57.563470  524003 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-151203 auto-151203
	I1129 10:25:57.626052  524003 network_create.go:108] docker network auto-151203 192.168.76.0/24 created
	I1129 10:25:57.626092  524003 kic.go:121] calculated static IP "192.168.76.2" for the "auto-151203" container
	I1129 10:25:57.626170  524003 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 10:25:57.641906  524003 cli_runner.go:164] Run: docker volume create auto-151203 --label name.minikube.sigs.k8s.io=auto-151203 --label created_by.minikube.sigs.k8s.io=true
	I1129 10:25:57.659307  524003 oci.go:103] Successfully created a docker volume auto-151203
	I1129 10:25:57.659396  524003 cli_runner.go:164] Run: docker run --rm --name auto-151203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-151203 --entrypoint /usr/bin/test -v auto-151203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 10:25:58.240981  524003 oci.go:107] Successfully prepared a docker volume auto-151203
	I1129 10:25:58.241058  524003 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 10:25:58.241075  524003 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 10:25:58.241162  524003 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-151203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.50361414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f2e83666-5a2e-43bb-b5de-18b7c883dbf5 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.506814622Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=46cc8475-fd51-4718-adcc-6b4c8bb8551f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.50695192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.521869308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523198842Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a9e6cf1b748a550eac08711f551accdeb51584ec18d29afff77a6d66384c7d81/merged/etc/passwd: no such file or directory"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523343229Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a9e6cf1b748a550eac08711f551accdeb51584ec18d29afff77a6d66384c7d81/merged/etc/group: no such file or directory"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.523686593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.561021791Z" level=info msg="Created container f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0: kube-system/storage-provisioner/storage-provisioner" id=46cc8475-fd51-4718-adcc-6b4c8bb8551f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.562502375Z" level=info msg="Starting container: f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0" id=41afb9fb-610c-4d98-96a4-4465c4d503e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 10:25:43 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:43.565228413Z" level=info msg="Started container" PID=1641 containerID=f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0 description=kube-system/storage-provisioner/storage-provisioner id=41afb9fb-610c-4d98-96a4-4465c4d503e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e134d07fe49ce545563b10c6f804457c5c73ad6fbbe8f72e49447ccadb371375
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.33857475Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342274109Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342306331Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.342331103Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346218173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346393756Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.346470401Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350674644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350865974Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.350954016Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357067377Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357254776Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.357384435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.365486587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 10:25:53 default-k8s-diff-port-194354 crio[657]: time="2025-11-29T10:25:53.365699628Z" level=info msg="Updated default CNI network name to kindnet"
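The CRI-O entries above show the runtime watching /etc/cni/net.d and re-reading the kindnet conflist each time kindnet rewrites it. A short sketch for inspecting the same state on the node, assuming the profile name from this log and that `minikube ssh` is given the command as an argument:

    minikube -p default-k8s-diff-port-194354 ssh "ls -l /etc/cni/net.d/"
    minikube -p default-k8s-diff-port-194354 ssh "cat /etc/cni/net.d/10-kindnet.conflist"
    minikube -p default-k8s-diff-port-194354 ssh "sudo journalctl -u crio --no-pager -n 50"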
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f66d514a2eb92       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   e134d07fe49ce       storage-provisioner                                    kube-system
	2ae519b1dac1d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   7ffb840300972       dashboard-metrics-scraper-6ffb444bf9-rk7jz             kubernetes-dashboard
	71c27d6cbb9d4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   e71466fa1a862       kubernetes-dashboard-855c9754f9-fxsbl                  kubernetes-dashboard
	4156a03d44719       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   696eb62c8bc7d       coredns-66bc5c9577-8rvzs                               kube-system
	4cb823a840ce0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   374f0d9221484       busybox                                                default
	d4349b4db02db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   99707f787c896       kube-proxy-68szw                                       kube-system
	dd920d356015c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   e134d07fe49ce       storage-provisioner                                    kube-system
	80376f1b84a82       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   4c44c9172ae57       kindnet-7xnqr                                          kube-system
	ce0fd82d0bd79       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1012dcf01e3db       kube-controller-manager-default-k8s-diff-port-194354   kube-system
	780974d2b2f4b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   48b10c8ecf1e2       kube-apiserver-default-k8s-diff-port-194354            kube-system
	022e047748e69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6ee324d4e76e7       etcd-default-k8s-diff-port-194354                      kube-system
	63793328bc875       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0099bb686601a       kube-scheduler-default-k8s-diff-port-194354            kube-system
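The table above is the CRI-level view of containers on the node. A sketch of how to reproduce it with crictl over SSH, again assuming the profile name taken from this log:

    minikube -p default-k8s-diff-port-194354 ssh "sudo crictl ps -a"
    # narrow to a single workload, e.g. the restarted storage-provisioner
    minikube -p default-k8s-diff-port-194354 ssh "sudo crictl ps -a --name storage-provisioner"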
	
	
	==> coredns [4156a03d4471982d7f1e0ee87abbd4c23d7ad47acfd3d7c9f78e30a38f482262] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46438 - 29375 "HINFO IN 1137683522763094149.9106790434843673739. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026745254s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
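The reflector errors above mean CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) for a period after the restart, which lines up with the kube-proxy and kindnet resyncs visible in the sections below. A sketch for confirming the Service, its endpoints, and the current CoreDNS state, assuming kubectl is pointed at this cluster:

    kubectl get svc kubernetes -o wide                                   # ClusterIP should be 10.96.0.1
    kubectl get endpointslices -l kubernetes.io/service-name=kubernetes  # apiserver endpoints behind the VIP
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20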
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-194354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-194354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-194354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T10_23_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 10:23:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-194354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 10:25:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:23:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 10:25:32 +0000   Sat, 29 Nov 2025 10:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-194354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                c5e28edc-52c7-4b90-b67b-b957ca9e0425
	  Boot ID:                    3e935453-6545-4280-8591-96d23ae43c03
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-8rvzs                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-194354                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-7xnqr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-194354             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-194354    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-68szw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-194354             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rk7jz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fxsbl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-194354 event: Registered Node default-k8s-diff-port-194354 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-194354 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-194354 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-194354 event: Registered Node default-k8s-diff-port-194354 in Controller
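The block above is kubectl describe node output for the control-plane node. When only the readiness condition or the allocatable resources are needed, a jsonpath query is a lighter-weight sketch (node name taken from the output above):

    kubectl get node default-k8s-diff-port-194354 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl get node default-k8s-diff-port-194354 -o jsonpath='{.status.allocatable}{"\n"}'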
	
	
	==> dmesg <==
	[  +0.578545] overlayfs: idmapped layers are currently not supported
	[Nov29 09:58] overlayfs: idmapped layers are currently not supported
	[ +18.035391] overlayfs: idmapped layers are currently not supported
	[Nov29 09:59] overlayfs: idmapped layers are currently not supported
	[Nov29 10:00] overlayfs: idmapped layers are currently not supported
	[Nov29 10:01] overlayfs: idmapped layers are currently not supported
	[Nov29 10:02] overlayfs: idmapped layers are currently not supported
	[Nov29 10:04] overlayfs: idmapped layers are currently not supported
	[Nov29 10:06] overlayfs: idmapped layers are currently not supported
	[ +25.628376] overlayfs: idmapped layers are currently not supported
	[Nov29 10:13] overlayfs: idmapped layers are currently not supported
	[Nov29 10:15] overlayfs: idmapped layers are currently not supported
	[ +49.339569] overlayfs: idmapped layers are currently not supported
	[Nov29 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.293758] overlayfs: idmapped layers are currently not supported
	[Nov29 10:17] overlayfs: idmapped layers are currently not supported
	[Nov29 10:18] overlayfs: idmapped layers are currently not supported
	[Nov29 10:20] overlayfs: idmapped layers are currently not supported
	[Nov29 10:21] overlayfs: idmapped layers are currently not supported
	[Nov29 10:22] overlayfs: idmapped layers are currently not supported
	[Nov29 10:23] overlayfs: idmapped layers are currently not supported
	[  +5.579159] overlayfs: idmapped layers are currently not supported
	[Nov29 10:25] overlayfs: idmapped layers are currently not supported
	[  +6.600462] overlayfs: idmapped layers are currently not supported
	[ +33.077974] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [022e047748e69418af4ebc42eb96a45df83b9d1d7f5c7d95684372ff9198d7ca] <==
	{"level":"warn","ts":"2025-11-29T10:25:07.958671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.009556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.094410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.124551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.178617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.208198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.240124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.266262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.286680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.329671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.369099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.403583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.432956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.471696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.514021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.554138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.588454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.631161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.662275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.749223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.819236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.872557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.888214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:08.930710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T10:25:09.046213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	
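The repeated "rejected connection on client endpoint ... EOF" warnings above are what etcd logs when something opens the client port and closes it without completing a TLS handshake (for example a TCP-level probe or a client giving up); by themselves they are not a failure. A sketch for checking etcd health from inside the static pod; the pod name comes from the describe output above, while the certificate paths are assumptions based on minikube's kubeadm-style layout and should be confirmed against /etc/kubernetes/manifests/etcd.yaml:

    kubectl -n kube-system exec etcd-default-k8s-diff-port-194354 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
      --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
      endpoint health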
	
	==> kernel <==
	 10:26:06 up  3:08,  0 user,  load average: 5.71, 4.47, 3.19
	Linux default-k8s-diff-port-194354 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [80376f1b84a8212571ab445745322c485da4dee9d893fccb971c5a4a8628bad1] <==
	I1129 10:25:13.123522       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 10:25:13.123735       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 10:25:13.123841       1 main.go:148] setting mtu 1500 for CNI 
	I1129 10:25:13.123854       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 10:25:13.123868       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T10:25:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 10:25:13.343849       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 10:25:13.343870       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 10:25:13.343879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 10:25:13.344005       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 10:25:43.343346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 10:25:43.343550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 10:25:43.343729       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 10:25:43.343869       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 10:25:44.844701       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 10:25:44.844823       1 metrics.go:72] Registering metrics
	I1129 10:25:44.844927       1 controller.go:711] "Syncing nftables rules"
	I1129 10:25:53.338169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:25:53.338305       1 main.go:301] handling current node
	I1129 10:26:03.342377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 10:26:03.342408       1 main.go:301] handling current node
	
	
	==> kube-apiserver [780974d2b2f4b2a8795f2a71e0983d493f7a5959e65c3f800b7e7bed5c5841be] <==
	I1129 10:25:11.414495       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 10:25:11.422998       1 policy_source.go:240] refreshing policies
	I1129 10:25:11.423646       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 10:25:11.414160       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 10:25:11.414412       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 10:25:11.425620       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 10:25:11.425887       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 10:25:11.425993       1 aggregator.go:171] initial CRD sync complete...
	I1129 10:25:11.426028       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 10:25:11.426057       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 10:25:11.430215       1 cache.go:39] Caches are synced for autoregister controller
	I1129 10:25:11.458319       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 10:25:11.459484       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 10:25:11.470437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1129 10:25:11.481151       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 10:25:12.626801       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 10:25:13.427290       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 10:25:13.769716       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 10:25:13.832746       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 10:25:13.931010       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 10:25:14.114322       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.152.58"}
	I1129 10:25:14.140810       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.56.227"}
	I1129 10:25:15.777492       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 10:25:15.826957       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 10:25:15.881882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
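The apiserver entries above record, among other things, the ClusterIP allocations for the two dashboard Services. A quick sketch for listing them and the workloads behind them, assuming kubectl targets this cluster:

    kubectl -n kubernetes-dashboard get svc -o wide
    kubectl -n kubernetes-dashboard get deploy,pods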
	
	
	==> kube-controller-manager [ce0fd82d0bd79a2222344eb64f283a2f997b836dc9783c79d7896af82a254d18] <==
	I1129 10:25:15.694937       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 10:25:15.704291       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 10:25:15.704673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 10:25:15.708786       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 10:25:15.708950       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 10:25:15.709068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-194354"
	I1129 10:25:15.709153       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 10:25:15.709730       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:15.709805       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 10:25:15.709844       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 10:25:15.709788       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 10:25:15.715108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 10:25:15.720207       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 10:25:15.720448       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 10:25:15.720532       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 10:25:15.724995       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 10:25:15.733940       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 10:25:15.734106       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 10:25:15.734183       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 10:25:15.734216       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 10:25:15.734245       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 10:25:15.735627       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 10:25:15.735723       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 10:25:15.740077       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 10:25:15.752445       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [d4349b4db02dbc959452e72b01b679dd6797bd59f8e1c1f1e9ceeba80768722c] <==
	I1129 10:25:13.953387       1 server_linux.go:53] "Using iptables proxy"
	I1129 10:25:14.236578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 10:25:14.338993       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 10:25:14.339034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 10:25:14.339108       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 10:25:14.599462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 10:25:14.599586       1 server_linux.go:132] "Using iptables Proxier"
	I1129 10:25:14.640610       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 10:25:14.640967       1 server.go:527] "Version info" version="v1.34.1"
	I1129 10:25:14.641028       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:14.646487       1 config.go:200] "Starting service config controller"
	I1129 10:25:14.648925       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 10:25:14.656975       1 config.go:106] "Starting endpoint slice config controller"
	I1129 10:25:14.657070       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 10:25:14.657117       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 10:25:14.657165       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 10:25:14.657894       1 config.go:309] "Starting node config controller"
	I1129 10:25:14.662182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 10:25:14.662258       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 10:25:14.753665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 10:25:14.757901       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 10:25:14.757961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
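kube-proxy above reports that it is running the iptables proxier in dual-stack mode with IPv4 primary. A sketch for inspecting the NAT chain it programs (KUBE-SERVICES is the standard entry chain for the iptables proxier), using the same SSH form as earlier:

    minikube -p default-k8s-diff-port-194354 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"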
	
	
	==> kube-scheduler [63793328bc8752412204a8263047290b0453435f744f79a3ca344412702eda5f] <==
	I1129 10:25:11.145521       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 10:25:11.209719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:11.222725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 10:25:11.222677       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 10:25:11.222703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 10:25:11.350708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1129 10:25:11.351074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 10:25:11.351157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 10:25:11.351224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 10:25:11.351272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 10:25:11.351322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 10:25:11.351368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 10:25:11.351422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 10:25:11.351474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 10:25:11.351522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 10:25:11.351564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 10:25:11.351613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 10:25:11.351662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 10:25:11.351718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 10:25:11.351814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 10:25:11.351847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 10:25:11.351888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 10:25:11.351929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 10:25:11.351983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1129 10:25:12.236925       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.386904     790 projected.go:196] Error preparing data for projected volume kube-api-access-bsqqc for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.386978     790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/576f53f6-546e-4dda-9d24-84453d5864d0-kube-api-access-bsqqc podName:576f53f6-546e-4dda-9d24-84453d5864d0 nodeName:}" failed. No retries permitted until 2025-11-29 10:25:17.886957116 +0000 UTC m=+17.348219006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bsqqc" (UniqueName: "kubernetes.io/projected/576f53f6-546e-4dda-9d24-84453d5864d0-kube-api-access-bsqqc") pod "dashboard-metrics-scraper-6ffb444bf9-rk7jz" (UID: "576f53f6-546e-4dda-9d24-84453d5864d0") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471348     790 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471547     790 projected.go:196] Error preparing data for projected volume kube-api-access-kvp5m for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fxsbl: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:17 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:17.471673     790 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/286ab319-2221-4ac1-9d62-92ceeb4e7c1d-kube-api-access-kvp5m podName:286ab319-2221-4ac1-9d62-92ceeb4e7c1d nodeName:}" failed. No retries permitted until 2025-11-29 10:25:17.971650521 +0000 UTC m=+17.432912412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kvp5m" (UniqueName: "kubernetes.io/projected/286ab319-2221-4ac1-9d62-92ceeb4e7c1d-kube-api-access-kvp5m") pod "kubernetes-dashboard-855c9754f9-fxsbl" (UID: "286ab319-2221-4ac1-9d62-92ceeb4e7c1d") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 10:25:18 default-k8s-diff-port-194354 kubelet[790]: W1129 10:25:18.121799     790 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4c5ba5cc24744d5d3dd8f3edb4014151da76452e2bbdc9a5e8e4961d763dbd88/crio-e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99 WatchSource:0}: Error finding container e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99: Status 404 returned error can't find the container with id e71466fa1a862a72c08653dbe7655e55e57913878245d3f1bf26c60bd3d39e99
	Nov 29 10:25:24 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:24.427490     790 scope.go:117] "RemoveContainer" containerID="3645cf8c978c5087f1f339c9397576f7d89a417f8b6fc642cb5e6b9e4210e339"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:25.442484     790 scope.go:117] "RemoveContainer" containerID="3645cf8c978c5087f1f339c9397576f7d89a417f8b6fc642cb5e6b9e4210e339"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:25.442800     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:25 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:25.442953     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:26 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:26.449292     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:26 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:26.449464     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:28 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:28.049634     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:28 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:28.049880     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:38 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:38.975199     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.485417     790 scope.go:117] "RemoveContainer" containerID="79f5c9e4750dde577c93fd949fdc99c1306e61e85a8c1fe2e07cfe68729a8d79"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.486128     790 scope.go:117] "RemoveContainer" containerID="2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:39.487515     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:25:39 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:39.517367     790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fxsbl" podStartSLOduration=10.122447307 podStartE2EDuration="23.51734952s" podCreationTimestamp="2025-11-29 10:25:16 +0000 UTC" firstStartedPulling="2025-11-29 10:25:18.124724991 +0000 UTC m=+17.585986882" lastFinishedPulling="2025-11-29 10:25:31.519627196 +0000 UTC m=+30.980889095" observedRunningTime="2025-11-29 10:25:32.480820133 +0000 UTC m=+31.942082040" watchObservedRunningTime="2025-11-29 10:25:39.51734952 +0000 UTC m=+38.978611411"
	Nov 29 10:25:43 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:43.500084     790 scope.go:117] "RemoveContainer" containerID="dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd"
	Nov 29 10:25:48 default-k8s-diff-port-194354 kubelet[790]: I1129 10:25:48.050240     790 scope.go:117] "RemoveContainer" containerID="2ae519b1dac1d77107d89fadfee08d2e54f6ad3bf37e38b5789e75e9faadb05b"
	Nov 29 10:25:48 default-k8s-diff-port-194354 kubelet[790]: E1129 10:25:48.050889     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rk7jz_kubernetes-dashboard(576f53f6-546e-4dda-9d24-84453d5864d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rk7jz" podUID="576f53f6-546e-4dda-9d24-84453d5864d0"
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 10:26:00 default-k8s-diff-port-194354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
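The kubelet entries come from the node's systemd journal; the final lines show systemd stopping the unit, consistent with the Pause step this test group exercises. A sketch for tailing the journal and checking the unit state afterwards:

    minikube -p default-k8s-diff-port-194354 ssh "sudo journalctl -u kubelet --no-pager -n 50"
    minikube -p default-k8s-diff-port-194354 ssh "sudo systemctl status kubelet --no-pager"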
	
	
	==> kubernetes-dashboard [71c27d6cbb9d418c72d89e546ad23a2171f1d6e4642d3ed24033fdf16a87b5d4] <==
	2025/11/29 10:25:31 Starting overwatch
	2025/11/29 10:25:31 Using namespace: kubernetes-dashboard
	2025/11/29 10:25:31 Using in-cluster config to connect to apiserver
	2025/11/29 10:25:31 Using secret token for csrf signing
	2025/11/29 10:25:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 10:25:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 10:25:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 10:25:31 Generating JWE encryption key
	2025/11/29 10:25:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 10:25:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 10:25:32 Initializing JWE encryption key from synchronized object
	2025/11/29 10:25:32 Creating in-cluster Sidecar client
	2025/11/29 10:25:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 10:25:32 Serving insecurely on HTTP port: 9090
	2025/11/29 10:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
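The dashboard's metrics-sidecar health check above fails because dashboard-metrics-scraper is not serving; the container status table earlier shows that pod's container exited and the kubelet log shows it in CrashLoopBackOff. A sketch for inspecting it, using the pod name from the listings above:

    kubectl -n kubernetes-dashboard get pods
    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-rk7jz --previous --tail=20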
	
	
	==> storage-provisioner [dd920d356015c1b7811b703a9edaeae17d4fd173b3aa9e4482b4ae163c2cd1dd] <==
	I1129 10:25:13.180489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 10:25:43.223459       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f66d514a2eb92aecfe4118e64c040b15fca9d66ef44642d08ade309a717c1ce0] <==
	I1129 10:25:43.610671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 10:25:43.636337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 10:25:43.642210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 10:25:43.647213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:47.102618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:51.363024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:54.962398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:25:58.015810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.038201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.051821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:26:01.052056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 10:26:01.054245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30!
	I1129 10:26:01.055925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55fa2c2d-ed1d-4d3e-8f29-7e39e322961c", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30 became leader
	W1129 10:26:01.062409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:01.076953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 10:26:01.154734       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-194354_d24bf3e4-4abb-4bb8-91b6-aa14e8b65f30!
	W1129 10:26:03.080658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:03.090280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:05.093675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 10:26:05.099798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354: exit status 2 (382.435506ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.48s)
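The post-mortem status checks that the harness runs above can be replayed by hand against the same profile while it still exists; a minimal sketch, using the exact profile name and flags shown in the helpers_test.go lines above:

	# exit status 2 from `minikube status` only means some component is not in the
	# expected state; helpers_test.go logs it as "may be ok" and keeps going
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
	# list any pods not in phase Running, as the harness does before closing the post-mortem
	kubectl --context default-k8s-diff-port-194354 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'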
E1129 10:32:22.384228  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.390708  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.402145  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.423591  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.465107  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.546505  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:22.708104  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:23.029726  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:23.671765  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:24.953562  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:27.515660  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:32.637798  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.831254  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.837756  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.849152  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.870833  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.912227  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:37.993821  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:38.155092  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:38.476903  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:39.119099  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:40.401175  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:42.879780  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:42.963312  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:48.085297  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:58.326805  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:32:59.820866  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:33:03.361288  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
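The cert_rotation errors above come from the main integration-test process (pid 302182), which keeps trying to reload client certificates for profiles (auto-151203, kindnet-151203, no-preload-949993) whose files no longer exist on disk. A quick manual check, sketched with the paths taken from this run:

	# the certificate files the watcher keeps trying to load are gone
	ls -l /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/auto-151203/client.crt
	ls -l /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt
	# profiles that do still have material on disk
	ls /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/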

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.5
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.84
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 171.82
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.86
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 40.74
50 TestCertExpiration 336.23
52 TestForceSystemdFlag 36.08
53 TestForceSystemdEnv 38.08
58 TestErrorSpam/setup 36.81
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.11
61 TestErrorSpam/pause 6.19
62 TestErrorSpam/unpause 5.75
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 24.36
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
75 TestFunctional/serial/CacheCmd/cache/add_local 1.07
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 29.86
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.5
87 TestFunctional/serial/InvalidService 4.05
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 11.09
91 TestFunctional/parallel/DryRun 0.65
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.35
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 24.94
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 2.23
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.84
130 TestFunctional/parallel/MountCmd/specific-port 2.15
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
132 TestFunctional/parallel/ServiceCmd/List 0.65
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.13
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.41
144 TestFunctional/parallel/ImageCommands/Setup 0.69
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 190.28
163 TestMultiControlPlane/serial/DeployApp 7.87
164 TestMultiControlPlane/serial/PingHostFromPods 1.47
165 TestMultiControlPlane/serial/AddWorkerNode 59.48
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.35
169 TestMultiControlPlane/serial/StopSecondaryNode 12.85
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 20.73
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 121.67
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.81
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
176 TestMultiControlPlane/serial/StopCluster 25.64
177 TestMultiControlPlane/serial/RestartCluster 84.93
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 67.86
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestJSONOutput/start/Command 80.41
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 39.75
211 TestKicCustomNetwork/use_default_bridge_network 35.4
212 TestKicExistingNetwork 36.59
213 TestKicCustomSubnet 37.72
214 TestKicStaticIP 37.4
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 69.87
219 TestMountStart/serial/StartWithMountFirst 6.28
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.33
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.95
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 138.57
231 TestMultiNode/serial/DeployApp2Nodes 4.64
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 58.41
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.38
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 8.25
239 TestMultiNode/serial/RestartKeepsNodes 77.42
240 TestMultiNode/serial/DeleteNode 5.66
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 54.21
243 TestMultiNode/serial/ValidateNameConflict 33.73
248 TestPreload 120.58
250 TestScheduledStopUnix 106.16
253 TestInsufficientStorage 13.22
254 TestRunningBinaryUpgrade 305.98
256 TestKubernetesUpgrade 335.55
257 TestMissingContainerUpgrade 104.92
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 48.61
261 TestNoKubernetes/serial/StartWithStopK8s 115.72
262 TestNoKubernetes/serial/Start 7.97
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 32.14
266 TestNoKubernetes/serial/Stop 1.4
267 TestNoKubernetes/serial/StartNoArgs 7.16
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
269 TestStoppedBinaryUpgrade/Setup 0.81
270 TestStoppedBinaryUpgrade/Upgrade 305.85
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
280 TestPause/serial/Start 79.13
281 TestPause/serial/SecondStartNoReconfiguration 27.21
290 TestNetworkPlugins/group/false 3.91
295 TestStartStop/group/old-k8s-version/serial/FirstStart 61.67
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
298 TestStartStop/group/old-k8s-version/serial/Stop 12.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 56.91
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/embed-certs/serial/FirstStart 81.22
307 TestStartStop/group/embed-certs/serial/DeployApp 9.3
309 TestStartStop/group/embed-certs/serial/Stop 12.08
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
311 TestStartStop/group/embed-certs/serial/SecondStart 57.36
313 TestStartStop/group/no-preload/serial/FirstStart 70.38
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/no-preload/serial/DeployApp 9.44
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.15
322 TestStartStop/group/no-preload/serial/Stop 12.11
323 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
324 TestStartStop/group/no-preload/serial/SecondStart 56.55
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
333 TestStartStop/group/newest-cni/serial/FirstStart 44.69
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.96
336 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/Stop 2.68
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/newest-cni/serial/SecondStart 15.34
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
347 TestNetworkPlugins/group/auto/Start 84.58
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
350 TestNetworkPlugins/group/kindnet/Start 87.23
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 10.29
353 TestNetworkPlugins/group/auto/DNS 0.16
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/kindnet/ControllerPod 6
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.46
359 TestNetworkPlugins/group/calico/Start 82.76
360 TestNetworkPlugins/group/kindnet/DNS 0.24
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/custom-flannel/Start 68.43
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.31
366 TestNetworkPlugins/group/calico/NetCatPod 11.27
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
369 TestNetworkPlugins/group/calico/DNS 0.18
370 TestNetworkPlugins/group/calico/Localhost 0.14
371 TestNetworkPlugins/group/calico/HairPin 0.14
372 TestNetworkPlugins/group/custom-flannel/DNS 0.19
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/enable-default-cni/Start 94.67
376 TestNetworkPlugins/group/flannel/Start 61.2
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
379 TestNetworkPlugins/group/flannel/NetCatPod 10.28
380 TestNetworkPlugins/group/flannel/DNS 0.16
381 TestNetworkPlugins/group/flannel/Localhost 0.14
382 TestNetworkPlugins/group/flannel/HairPin 0.16
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.39
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
388 TestNetworkPlugins/group/bridge/Start 78.12
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.16
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (9.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-574220 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-574220 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.49473085s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1129 09:15:23.884739  302182 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1129 09:15:23.884820  302182 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-574220
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-574220: exit status 85 (94.599058ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-574220 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-574220 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:14.452192  302188 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:14.452425  302188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:14.452458  302188 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:14.452480  302188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:14.452749  302188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	W1129 09:15:14.452907  302188 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22000-300311/.minikube/config/config.json: open /home/jenkins/minikube-integration/22000-300311/.minikube/config/config.json: no such file or directory
	I1129 09:15:14.453348  302188 out.go:368] Setting JSON to true
	I1129 09:15:14.454299  302188 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7064,"bootTime":1764400651,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:15:14.454397  302188 start.go:143] virtualization:  
	I1129 09:15:14.460205  302188 out.go:99] [download-only-574220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1129 09:15:14.460425  302188 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball: no such file or directory
	I1129 09:15:14.460560  302188 notify.go:221] Checking for updates...
	I1129 09:15:14.464586  302188 out.go:171] MINIKUBE_LOCATION=22000
	I1129 09:15:14.468262  302188 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:14.471592  302188 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:15:14.474829  302188 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:15:14.477933  302188 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1129 09:15:14.484225  302188 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 09:15:14.484587  302188 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:14.518612  302188 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:15:14.518766  302188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:14.599148  302188 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-29 09:15:14.58861511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:14.599266  302188 docker.go:319] overlay module found
	I1129 09:15:14.602546  302188 out.go:99] Using the docker driver based on user configuration
	I1129 09:15:14.602590  302188 start.go:309] selected driver: docker
	I1129 09:15:14.602603  302188 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:14.602714  302188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:14.681064  302188 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-29 09:15:14.672041123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:14.681210  302188 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:14.681460  302188 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1129 09:15:14.681601  302188 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 09:15:14.684830  302188 out.go:171] Using Docker driver with root privileges
	I1129 09:15:14.687903  302188 cni.go:84] Creating CNI manager for ""
	I1129 09:15:14.687981  302188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:15:14.687992  302188 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:15:14.688072  302188 start.go:353] cluster config:
	{Name:download-only-574220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-574220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:15:14.691151  302188 out.go:99] Starting "download-only-574220" primary control-plane node in "download-only-574220" cluster
	I1129 09:15:14.691177  302188 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:15:14.694099  302188 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:15:14.694153  302188 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 09:15:14.694299  302188 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:15:14.720806  302188 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 09:15:14.721006  302188 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 09:15:14.721097  302188 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 09:15:14.748281  302188 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1129 09:15:14.748306  302188 cache.go:65] Caching tarball of preloaded images
	I1129 09:15:14.748467  302188 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 09:15:14.751771  302188 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1129 09:15:14.751803  302188 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1129 09:15:14.838145  302188 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1129 09:15:14.838290  302188 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1129 09:15:19.331099  302188 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1129 09:15:19.331622  302188 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/download-only-574220/config.json ...
	I1129 09:15:19.331682  302188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/download-only-574220/config.json: {Name:mk1a65962a2d7cb228723140d01b31f615e12d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:15:19.331925  302188 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 09:15:19.332220  302188 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-574220 host does not exist
	  To start a cluster, run: "minikube start -p download-only-574220"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
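For reference, the exit status 85 shown above is expected here: a --download-only profile never creates a host ("The control-plane node download-only-574220 host does not exist"), so `minikube logs` can only print the audit and last-start sections before failing. A sketch of replaying the harness call while the profile still exists (it is removed again by DeleteAlwaysSucceeds below):

	out/minikube-linux-arm64 logs -p download-only-574220
	echo "exit status: $?"   # 85 here, matching the Non-zero exit line in the test output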

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-574220
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-777977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-777977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.835737905s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1129 09:15:29.176860  302182 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1129 09:15:29.176895  302182 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-777977
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-777977: exit status 85 (89.245137ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-574220 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-574220 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p download-only-574220                                                                                                                                                   │ download-only-574220 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -o=json --download-only -p download-only-777977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-777977 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:24.384092  302385 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:24.384216  302385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:24.384254  302385 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:24.384269  302385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:24.384530  302385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:15:24.384933  302385 out.go:368] Setting JSON to true
	I1129 09:15:24.385732  302385 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7074,"bootTime":1764400651,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:15:24.385800  302385 start.go:143] virtualization:  
	I1129 09:15:24.389102  302385 out.go:99] [download-only-777977] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:15:24.389391  302385 notify.go:221] Checking for updates...
	I1129 09:15:24.392870  302385 out.go:171] MINIKUBE_LOCATION=22000
	I1129 09:15:24.396203  302385 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:24.399082  302385 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:15:24.401986  302385 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:15:24.404846  302385 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1129 09:15:24.410621  302385 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 09:15:24.410892  302385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:24.439441  302385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:15:24.439568  302385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:24.502344  302385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:24.493111839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:24.502453  302385 docker.go:319] overlay module found
	I1129 09:15:24.505461  302385 out.go:99] Using the docker driver based on user configuration
	I1129 09:15:24.505505  302385 start.go:309] selected driver: docker
	I1129 09:15:24.505513  302385 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:24.505630  302385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:24.560903  302385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-29 09:15:24.551533781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:15:24.561061  302385 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:24.561356  302385 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1129 09:15:24.561514  302385 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 09:15:24.564734  302385 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-777977 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777977"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-777977
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1129 09:15:30.371034  302182 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-549171 --alsologtostderr --binary-mirror http://127.0.0.1:40279 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-549171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-549171
--- PASS: TestBinaryMirror (0.60s)
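Note: the download logged at the top of this test is checksum-verified against the published .sha256 file (the "checksum=file:" part of the URL above). A rough manual equivalent of that verification, using the same URLs from the log line, is sketched below; it is not part of the test itself.
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check    # should print: kubectl: OK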

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-937561
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-937561: exit status 85 (69.07247ms)

                                                
                                                
-- stdout --
	* Profile "addons-937561" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937561"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-937561
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-937561: exit status 85 (72.169018ms)

                                                
                                                
-- stdout --
	* Profile "addons-937561" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937561"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (171.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-937561 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-937561 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m51.82189215s)
--- PASS: TestAddons/Setup (171.82s)
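Note: a quick way to confirm which of the addons requested above actually came up is the addons list view plus a cluster-wide pod listing; a minimal sketch against the profile from this run:
	out/minikube-linux-arm64 -p addons-937561 addons list
	kubectl --context addons-937561 get pods -A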

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-937561 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-937561 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-937561 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-937561 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7157c2f-990a-4dba-877d-2f1f6dc08159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7157c2f-990a-4dba-877d-2f1f6dc08159] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003789665s
addons_test.go:694: (dbg) Run:  kubectl --context addons-937561 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-937561 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-937561 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-937561 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)
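Note: the assertions above reduce to the gcp-auth webhook having injected fake credentials into the busybox pod. The same spot-check by hand, reusing the pod and context from this run:
	kubectl --context addons-937561 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
	kubectl --context addons-937561 exec busybox -- cat /google-app-creds.json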

                                                
                                    
TestAddons/StoppedEnableDisable (12.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-937561
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-937561: (12.115480989s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-937561
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-937561
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-937561
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
TestCertOptions (40.74s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-033056 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.789787496s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-033056 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-033056 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-033056 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-033056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-033056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-033056: (2.176451455s)
--- PASS: TestCertOptions (40.74s)
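Note: the openssl call above dumps the whole apiserver certificate; a narrower sketch that looks only at the extra SANs and port this test requests (assumes the cert-options-033056 profile is still present):
	out/minikube-linux-arm64 -p cert-options-033056 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# expect 192.168.15.15, localhost and www.google.com among the SANs
	kubectl --context cert-options-033056 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# expect the server URL to end in :8555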

                                                
                                    
TestCertExpiration (336.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-930117 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.781807907s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-930117 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m50.834640056s)
helpers_test.go:175: Cleaning up "cert-expiration-930117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-930117
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-930117: (2.61248673s)
--- PASS: TestCertExpiration (336.23s)
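Note: to see the effect of --cert-expiration directly, the apiserver certificate's notAfter date can be read from inside the node; a sketch using this run's profile (same certificate path that TestCertOptions inspects):
	out/minikube-linux-arm64 -p cert-expiration-930117 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# after the second start with --cert-expiration=8760h the date should be roughly a year out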

                                                
                                    
TestForceSystemdFlag (36.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-345078 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1129 10:15:11.475304  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-345078 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.22958943s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-345078 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-345078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-345078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-345078: (2.557304507s)
--- PASS: TestForceSystemdFlag (36.08s)
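Note: the cat above is what the test asserts on; with --force-systemd the CRI-O drop-in is expected to select the systemd cgroup manager. A minimal manual check (the key name follows CRI-O's config format):
	out/minikube-linux-arm64 -p force-systemd-flag-345078 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected: cgroup_manager = "systemd"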

                                                
                                    
TestForceSystemdEnv (38.08s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-510051 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.535594551s)
helpers_test.go:175: Cleaning up "force-systemd-env-510051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-510051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-510051: (2.546571761s)
--- PASS: TestForceSystemdEnv (38.08s)

                                                
                                    
TestErrorSpam/setup (36.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-752953 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-752953 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-752953 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-752953 --driver=docker  --container-runtime=crio: (36.808059455s)
--- PASS: TestErrorSpam/setup (36.81s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (6.19s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause: exit status 80 (1.764627261s)

                                                
                                                
-- stdout --
	* Pausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause: exit status 80 (2.092857223s)

                                                
                                                
-- stdout --
	* Pausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause: exit status 80 (2.329984954s)

                                                
                                                
-- stdout --
	* Pausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.19s)
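Note: all three pause attempts fail the same way: minikube shells into the node and runs "sudo runc list -f json", which errors because /run/runc does not exist. A minimal sketch for reproducing the underlying check by hand on this profile:
	out/minikube-linux-arm64 -p nospam-752953 ssh "ls /run/runc"            # fails: state directory absent
	out/minikube-linux-arm64 -p nospam-752953 ssh "sudo runc list -f json"  # same error as in the stderr blocks above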

                                                
                                    
TestErrorSpam/unpause (5.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause: exit status 80 (1.795241066s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause: exit status 80 (1.797382179s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause: exit status 80 (2.157873507s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-752953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:22:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.75s)

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 stop: (1.315323796s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-752953 --log_dir /tmp/nospam-752953 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22000-300311/.minikube/files/etc/test/nested/copy/302182/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1129 09:23:24.014716  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.021256  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.032779  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.054263  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.095720  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.177133  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.338786  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:24.660522  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:25.302271  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:26.584437  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:29.145799  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:34.267700  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:23:44.509092  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-014829 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.999852395s)
--- PASS: TestFunctional/serial/StartWithProxy (79.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (24.36s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1129 09:24:00.776221  302182 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --alsologtostderr -v=8
E1129 09:24:04.990502  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-014829 --alsologtostderr -v=8: (24.35741874s)
functional_test.go:678: soft start took 24.35791981s for "functional-014829" cluster.
I1129 09:24:25.133912  302182 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (24.36s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-014829 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:3.1: (1.194453011s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:3.3: (1.133976522s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 cache add registry.k8s.io/pause:latest: (1.120130865s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014829 /tmp/TestFunctionalserialCacheCmdcacheadd_local30566013/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache add minikube-local-cache-test:functional-014829
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache delete minikube-local-cache-test:functional-014829
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014829
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)
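Note: the add_local flow builds a throwaway image, loads it into minikube's cache, then removes it again; roughly the same sequence by hand (the build-context path here is a placeholder):
	docker build -t minikube-local-cache-test:functional-014829 /path/to/build/context
	out/minikube-linux-arm64 -p functional-014829 cache add minikube-local-cache-test:functional-014829
	out/minikube-linux-arm64 -p functional-014829 cache delete minikube-local-cache-test:functional-014829
	docker rmi minikube-local-cache-test:functional-014829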

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.860311ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)
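Note: the reload test removes a cached image from the node, confirms it is gone, then restores it from minikube's on-disk cache; the same round trip by hand:
	out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-linux-arm64 -p functional-014829 cache reload
	out/minikube-linux-arm64 -p functional-014829 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again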

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 kubectl -- --context functional-014829 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-014829 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1129 09:24:45.953225  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-014829 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.859115443s)
functional_test.go:776: restart took 29.859211781s for "functional-014829" cluster.
I1129 09:25:02.424001  302182 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (29.86s)
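Note: the restart above passes an apiserver admission-plugin override via --extra-config; whether it took effect can be read off the running kube-apiserver pod spec. A sketch (the component=kube-apiserver label is the standard kubeadm one, not something this log shows):
	kubectl --context functional-014829 -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins
	# expected to include NamespaceAutoProvision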

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-014829 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 logs: (1.469488869s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 logs --file /tmp/TestFunctionalserialLogsFileCmd905074699/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 logs --file /tmp/TestFunctionalserialLogsFileCmd905074699/001/logs.txt: (1.495150459s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-014829 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-014829
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-014829: exit status 115 (381.324568ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31244 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-014829 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
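Note: the contents of testdata/invalidsvc.yaml are not shown in this log; a hypothetical stand-in that produces the same SVC_UNREACHABLE outcome is any NodePort service whose selector matches no running pod, for example:
	cat <<-'EOF' | kubectl --context functional-014829 apply -f -
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: does-not-exist
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-arm64 service invalid-svc -p functional-014829   # exits 115: no running pod for the service
	kubectl --context functional-014829 delete svc invalid-svc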

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 config get cpus: exit status 14 (98.376068ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 config get cpus: exit status 14 (79.519591ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
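Note: the config round trip above is: unset a key, confirm get fails, set it, read it back, unset it again; exit status 14 is the "key not found" case. Compactly:
	out/minikube-linux-arm64 -p functional-014829 config set cpus 2
	out/minikube-linux-arm64 -p functional-014829 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-014829 config unset cpus
	out/minikube-linux-arm64 -p functional-014829 config get cpus     # exit status 14: key not in config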

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-014829 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-014829 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 328769: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.09s)

                                                
                                    
TestFunctional/parallel/DryRun (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (282.871623ms)

                                                
                                                
-- stdout --
	* [functional-014829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:35:39.341932  328186 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:35:39.342055  328186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:39.342061  328186 out.go:374] Setting ErrFile to fd 2...
	I1129 09:35:39.342066  328186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:39.342501  328186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:35:39.342954  328186 out.go:368] Setting JSON to false
	I1129 09:35:39.343912  328186 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8289,"bootTime":1764400651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:35:39.344002  328186 start.go:143] virtualization:  
	I1129 09:35:39.350626  328186 out.go:179] * [functional-014829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 09:35:39.353563  328186 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:35:39.353629  328186 notify.go:221] Checking for updates...
	I1129 09:35:39.363564  328186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:35:39.366477  328186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:35:39.369438  328186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:35:39.372340  328186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:35:39.376525  328186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:35:39.379931  328186 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:35:39.380565  328186 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:35:39.420442  328186 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:35:39.420576  328186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:35:39.519452  328186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:35:39.506924973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:35:39.519570  328186 docker.go:319] overlay module found
	I1129 09:35:39.522759  328186 out.go:179] * Using the docker driver based on existing profile
	I1129 09:35:39.525921  328186 start.go:309] selected driver: docker
	I1129 09:35:39.525943  328186 start.go:927] validating driver "docker" against &{Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:35:39.526032  328186 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:35:39.529961  328186 out.go:203] 
	W1129 09:35:39.535121  328186 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1129 09:35:39.537925  328186 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.65s)
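Note: the DryRun test above makes two passes: the stderr trace at the top is the deliberately under-provisioned run (250MiB, rejected with RSRC_INSUFFICIENT_REQ_MEMORY), and the second invocation validates the existing profile without creating or modifying anything. A minimal manual reproduction, using the same profile, driver and runtime as this run:
	# rejected: 250MiB is below the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY)
	out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	# accepted: configuration is validated only; no container is started or changed
	out/minikube-linux-arm64 start -p functional-014829 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio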

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (247.549697ms)

                                                
                                                
-- stdout --
	* [functional-014829] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:35:39.069986  328104 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:35:39.070186  328104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:39.070196  328104 out.go:374] Setting ErrFile to fd 2...
	I1129 09:35:39.070202  328104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:35:39.070591  328104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:35:39.070968  328104 out.go:368] Setting JSON to false
	I1129 09:35:39.071808  328104 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8288,"bootTime":1764400651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 09:35:39.071878  328104 start.go:143] virtualization:  
	I1129 09:35:39.076554  328104 out.go:179] * [functional-014829] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1129 09:35:39.079857  328104 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:35:39.079941  328104 notify.go:221] Checking for updates...
	I1129 09:35:39.085647  328104 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:35:39.088836  328104 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 09:35:39.093948  328104 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 09:35:39.102307  328104 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 09:35:39.105440  328104 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:35:39.108750  328104 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:35:39.109350  328104 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:35:39.151312  328104 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 09:35:39.151439  328104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:35:39.239378  328104 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 09:35:39.227860722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:35:39.239786  328104 docker.go:319] overlay module found
	I1129 09:35:39.242957  328104 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1129 09:35:39.245795  328104 start.go:309] selected driver: docker
	I1129 09:35:39.245814  328104 start.go:927] validating driver "docker" against &{Name:functional-014829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-014829 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:35:39.245898  328104 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:35:39.249412  328104 out.go:203] 
	W1129 09:35:39.252263  328104 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1129 09:35:39.255126  328104 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
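Note: the French output above comes from the same dry-run command; the harness presumably selects the locale through the standard environment variables rather than a dedicated flag. A hedged manual reproduction (the LC_ALL/LANG values are assumptions, not taken from this log):
	# assumption: minikube localizes console output from the process locale
	LC_ALL=fr LANG=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-014829 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio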

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
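Note: the second status invocation above renders a Go template over the status struct fields (Host, Kubelet, APIServer, Kubeconfig); the "kublet:" prefix is only label text in the format string, not a field name. The three forms exercised, as a sketch:
	out/minikube-linux-arm64 -p functional-014829 status              # human-readable summary
	out/minikube-linux-arm64 -p functional-014829 status -o json      # machine-readable
	out/minikube-linux-arm64 -p functional-014829 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'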

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eab93c6c-acc6-4ab9-b500-3f1e5eeaa580] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003510537s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-014829 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-014829 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-014829 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-014829 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d4ef4cb5-561c-4127-8534-3dd1163239ca] Pending
helpers_test.go:352: "sp-pod" [d4ef4cb5-561c-4127-8534-3dd1163239ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d4ef4cb5-561c-4127-8534-3dd1163239ca] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003372705s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-014829 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-014829 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-014829 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e23d7592-bf47-4b66-a46c-d5792a85413f] Pending
helpers_test.go:352: "sp-pod" [e23d7592-bf47-4b66-a46c-d5792a85413f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002807608s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-014829 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.94s)
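Note: the PVC test walks a complete persistence round trip: create a claim, mount it in a pod, write a file, delete and recreate the pod, and confirm the file survives. A condensed manual version of the same flow, assuming the testdata manifests from the minikube repo (testdata/storage-provisioner/pvc.yaml and pod.yaml):
	kubectl --context functional-014829 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-014829 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014829 exec sp-pod -- touch /tmp/mount/foo    # write through the claim
	kubectl --context functional-014829 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014829 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-014829 exec sp-pod -- ls /tmp/mount           # foo should still be present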

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh -n functional-014829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cp functional-014829:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1727312286/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh -n functional-014829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh -n functional-014829 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)
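Note: the cp test copies a file into the node, back out to the host, and into a guest directory that does not yet exist. Equivalent manual commands, taken from the invocations above (the local destination path is arbitrary):
	out/minikube-linux-arm64 -p functional-014829 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-014829 cp functional-014829:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-linux-arm64 -p functional-014829 ssh -n functional-014829 "sudo cat /home/docker/cp-test.txt"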

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/302182/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /etc/test/nested/copy/302182/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/302182.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /etc/ssl/certs/302182.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/302182.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /usr/share/ca-certificates/302182.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3021822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /etc/ssl/certs/3021822.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3021822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /usr/share/ca-certificates/3021822.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-014829 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active docker": exit status 1 (351.992831ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active containerd": exit status 1 (375.335801ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
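Note: since this profile runs crio, the test expects the docker and containerd units inside the node to be inactive; `systemctl is-active` exits non-zero for an inactive unit, which is why both invocations report a non-zero exit yet the test still passes. Manual check (the crio line is an extra sanity check not performed by the test):
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active crio"        # expected: active
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active docker"      # expected: inactive, non-zero exit
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo systemctl is-active containerd"  # expected: inactive, non-zero exit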

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 324451: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-014829 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [75685369-8fdf-4a98-9790-b7e4647f956f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [75685369-8fdf-4a98-9790-b7e4647f956f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004324649s
I1129 09:25:20.880669  302182 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-014829 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.8.207 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
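Note: taken together, the tunnel subtests exercise the full LoadBalancer workflow: start a tunnel, deploy a LoadBalancer Service, wait for an ingress IP, and hit it directly. A condensed manual version, assuming the testsvc.yaml manifest from the repo (the curl step stands in for the "tunnel ... is working" probe):
	out/minikube-linux-arm64 -p functional-014829 tunnel --alsologtostderr &   # keep running in the background
	kubectl --context functional-014829 apply -f testdata/testsvc.yaml
	kubectl --context functional-014829 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.104.8.207/    # IP reported by this run; yours will differ
	kill %1                      # stop the background tunnel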

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "366.891117ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.745644ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.064996ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.402343ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
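Note: the profile subtests mainly compare runtimes of the list variants; for scripting, the JSON forms are the useful ones. The --light variant skips validating cluster status, which is why it returns in ~56ms versus ~378ms above:
	out/minikube-linux-arm64 profile list -o json            # full per-profile detail
	out/minikube-linux-arm64 profile list -o json --light    # faster, skips cluster status validation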

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdany-port3487977658/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764408926239272496" to /tmp/TestFunctionalparallelMountCmdany-port3487977658/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764408926239272496" to /tmp/TestFunctionalparallelMountCmdany-port3487977658/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764408926239272496" to /tmp/TestFunctionalparallelMountCmdany-port3487977658/001/test-1764408926239272496
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.342028ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 09:35:26.573875  302182 retry.go:31] will retry after 457.484431ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 29 09:35 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 29 09:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 29 09:35 test-1764408926239272496
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh cat /mount-9p/test-1764408926239272496
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-014829 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c2588048-a6a6-4f7e-9be0-806b123e7a21] Pending
helpers_test.go:352: "busybox-mount" [c2588048-a6a6-4f7e-9be0-806b123e7a21] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c2588048-a6a6-4f7e-9be0-806b123e7a21] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c2588048-a6a6-4f7e-9be0-806b123e7a21] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004248627s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-014829 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdany-port3487977658/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)
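Note: the any-port test shares a host directory into the guest over 9p, verifies the mount with findmnt, and then has a busybox pod (testdata/busybox-mount-test.yaml) consume it. A minimal manual version of the host-side half; /tmp/host-dir is a hypothetical placeholder for any host directory:
	out/minikube-linux-arm64 mount -p functional-014829 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is live
	out/minikube-linux-arm64 -p functional-014829 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo umount -f /mount-9p"         # clean up, then kill the background mount
	kill %1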

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdspecific-port3244959343/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.934311ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 09:35:34.425516  302182 retry.go:31] will retry after 736.86456ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdspecific-port3244959343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh "sudo umount -f /mount-9p": exit status 1 (294.646233ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-014829 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdspecific-port3244959343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-014829 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-014829 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3677853124/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
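Note: VerifyCleanup checks that a single `mount --kill=true` tears down all concurrent mount helpers for the profile; the three "unable to find parent, assuming dead" lines confirm the mount processes were already gone when the test tried to stop them. The cleanup command on its own:
	out/minikube-linux-arm64 mount -p functional-014829 --kill=true   # terminates the running "minikube mount" process(es) for this profile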

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 service list -o json
functional_test.go:1504: Took "589.95685ms" to run "out/minikube-linux-arm64 -p functional-014829 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 version -o=json --components: (1.134674348s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-014829 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-014829 image ls --format short --alsologtostderr:
I1129 09:35:54.342956  330627 out.go:360] Setting OutFile to fd 1 ...
I1129 09:35:54.343106  330627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.343118  330627 out.go:374] Setting ErrFile to fd 2...
I1129 09:35:54.343129  330627 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.343521  330627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
I1129 09:35:54.346386  330627 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.346563  330627 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.347302  330627 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
I1129 09:35:54.381818  330627 ssh_runner.go:195] Run: systemctl --version
I1129 09:35:54.381882  330627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
I1129 09:35:54.405087  330627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
I1129 09:35:54.512955  330627 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
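Note: the stderr trace above shows how `image ls` works with crio: minikube opens an SSH session to the node and runs `sudo crictl images --output json`, then formats the result. The same data can be inspected directly:
	out/minikube-linux-arm64 -p functional-014829 ssh "sudo crictl images --output json"
	out/minikube-linux-arm64 -p functional-014829 image ls --format table   # the rendered view shown in the next subtest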

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-014829 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-014829 image ls --format table --alsologtostderr:
I1129 09:35:55.192019  330860 out.go:360] Setting OutFile to fd 1 ...
I1129 09:35:55.192258  330860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:55.192302  330860 out.go:374] Setting ErrFile to fd 2...
I1129 09:35:55.192325  330860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:55.192700  330860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
I1129 09:35:55.193583  330860 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:55.193810  330860 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:55.194727  330860 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
I1129 09:35:55.212408  330860 ssh_runner.go:195] Run: systemctl --version
I1129 09:35:55.212467  330860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
I1129 09:35:55.230662  330860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
I1129 09:35:55.336555  330860 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-014829 image ls --format json --alsologtostderr:
[{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"
repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","regis
try.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced6
87cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"siz
e":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d44
0c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-014829 image ls --format json --alsologtostderr:
I1129 09:35:54.915884  330798 out.go:360] Setting OutFile to fd 1 ...
I1129 09:35:54.916474  330798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.916509  330798 out.go:374] Setting ErrFile to fd 2...
I1129 09:35:54.916530  330798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.916839  330798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
I1129 09:35:54.917564  330798 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.917737  330798 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.918383  330798 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
I1129 09:35:54.944139  330798 ssh_runner.go:195] Run: systemctl --version
I1129 09:35:54.944204  330798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
I1129 09:35:54.971586  330798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
I1129 09:35:55.085753  330798 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
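
For reference, the JSON above can be filtered on the host without re-running the test; a minimal sketch, assuming jq is installed alongside the minikube binary and the functional-014829 profile is still running:

  # Print only the repo tags from the JSON listing (untagged images contribute nothing).
  out/minikube-linux-arm64 -p functional-014829 image ls --format json | jq -r '.[].repoTags[]?'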

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-014829 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-014829 image ls --format yaml --alsologtostderr:
I1129 09:35:54.636134  330728 out.go:360] Setting OutFile to fd 1 ...
I1129 09:35:54.636324  330728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.636354  330728 out.go:374] Setting ErrFile to fd 2...
I1129 09:35:54.636379  330728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.636655  330728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
I1129 09:35:54.637323  330728 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.637500  330728 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.638053  330728 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
I1129 09:35:54.658578  330728 ssh_runner.go:195] Run: systemctl --version
I1129 09:35:54.658629  330728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
I1129 09:35:54.682233  330728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
I1129 09:35:54.793489  330728 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
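
Both list formats are built from the same "sudo crictl images --output json" call visible in the stderr above; the raw listing can also be fetched directly, assuming the profile is still up:

  # Run the same query the image ls command performs inside the node.
  out/minikube-linux-arm64 -p functional-014829 ssh "sudo crictl images --output json"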

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-014829 ssh pgrep buildkitd: exit status 1 (369.271672ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image build -t localhost/my-image:functional-014829 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-014829 image build -t localhost/my-image:functional-014829 testdata/build --alsologtostderr: (3.803075753s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-014829 image build -t localhost/my-image:functional-014829 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3510db7a780
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-014829
--> 4a60e8ad879
Successfully tagged localhost/my-image:functional-014829
4a60e8ad879990f616818769df307e22b0dda24de8ca6fab85ac8b99a687a895
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-014829 image build -t localhost/my-image:functional-014829 testdata/build --alsologtostderr:
I1129 09:35:54.738449  330754 out.go:360] Setting OutFile to fd 1 ...
I1129 09:35:54.740538  330754 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.740596  330754 out.go:374] Setting ErrFile to fd 2...
I1129 09:35:54.740619  330754 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 09:35:54.742193  330754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
I1129 09:35:54.742987  330754 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.743903  330754 config.go:182] Loaded profile config "functional-014829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 09:35:54.744540  330754 cli_runner.go:164] Run: docker container inspect functional-014829 --format={{.State.Status}}
I1129 09:35:54.762538  330754 ssh_runner.go:195] Run: systemctl --version
I1129 09:35:54.762591  330754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-014829
I1129 09:35:54.782511  330754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/functional-014829/id_rsa Username:docker}
I1129 09:35:54.896673  330754 build_images.go:162] Building image from path: /tmp/build.338574679.tar
I1129 09:35:54.896753  330754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1129 09:35:54.905911  330754 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.338574679.tar
I1129 09:35:54.911128  330754 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.338574679.tar: stat -c "%s %y" /var/lib/minikube/build/build.338574679.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.338574679.tar': No such file or directory
I1129 09:35:54.911172  330754 ssh_runner.go:362] scp /tmp/build.338574679.tar --> /var/lib/minikube/build/build.338574679.tar (3072 bytes)
I1129 09:35:54.939378  330754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.338574679
I1129 09:35:54.950800  330754 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.338574679 -xf /var/lib/minikube/build/build.338574679.tar
I1129 09:35:54.962968  330754 crio.go:315] Building image: /var/lib/minikube/build/build.338574679
I1129 09:35:54.963040  330754 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-014829 /var/lib/minikube/build/build.338574679 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1129 09:35:58.446987  330754 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-014829 /var/lib/minikube/build/build.338574679 --cgroup-manager=cgroupfs: (3.483917997s)
I1129 09:35:58.447066  330754 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.338574679
I1129 09:35:58.455456  330754 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.338574679.tar
I1129 09:35:58.463601  330754 build_images.go:218] Built localhost/my-image:functional-014829 from /tmp/build.338574679.tar
I1129 09:35:58.463635  330754 build_images.go:134] succeeded building to: functional-014829
I1129 09:35:58.463645  330754 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)
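
The three STEP lines come from the Dockerfile under testdata/build, which is not reproduced in this report; a build context along the following lines (the content.txt contents here are a stand-in) produces the same steps and can be built by hand against the running profile:

  # Recreate an equivalent build context and build it inside the node's podman, as the test does.
  mkdir -p /tmp/build-sketch
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
  echo hello > /tmp/build-sketch/content.txt
  out/minikube-linux-arm64 -p functional-014829 image build -t localhost/my-image:functional-014829 /tmp/build-sketch --alsologtostderr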

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-014829
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)
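
The image tagged here feeds the image load/save subtests later in the run, several of which failed; loading it into the node by hand is a one-liner, assuming the tag created above still exists in the host docker daemon:

  # Copy the tagged image from the host docker daemon into the node's container runtime, then confirm it is listed.
  out/minikube-linux-arm64 -p functional-014829 image load kicbase/echo-server:functional-014829
  out/minikube-linux-arm64 -p functional-014829 image ls | grep echo-server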

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
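
All three update-context variants rewrite the kubeconfig entry for the profile; a quick way to confirm the result afterwards, assuming kubectl is on the PATH:

  # The context should exist and point at a reachable cluster after update-context.
  kubectl config get-contexts functional-014829
  kubectl --context functional-014829 get nodes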

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image rm kicbase/echo-server:functional-014829 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-014829 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-014829
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-014829
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014829
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (190.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1129 09:38:24.013500  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m9.364494505s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (190.28s)
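
The --ha start brings up three control-plane nodes behind the single kubeconfig endpoint that the status checks later in this report probe (192.168.49.254:8443); a quick shape check after the start, assuming kubectl has picked up the ha-925058 context:

  # Control-plane nodes carry the kubeadm control-plane role label.
  kubectl --context ha-925058 get nodes -l node-role.kubernetes.io/control-plane -o name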

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 kubectl -- rollout status deployment/busybox: (5.202718927s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-x87st -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-xs7h4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-ztlnp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-x87st -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-xs7h4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-ztlnp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-x87st -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-xs7h4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-ztlnp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.87s)
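
The DNS checks above address each busybox pod by name; the same lookups can be run against the deployment as a whole, a sketch assuming the busybox deployment from ha-pod-dns-test.yaml is still present:

  # Resolve the in-cluster service name from any pod of the deployment.
  kubectl --context ha-925058 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local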

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-x87st -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-x87st -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-xs7h4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-xs7h4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-ztlnp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 kubectl -- exec busybox-7b57f96db7-ztlnp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
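
The awk/cut pipeline above extracts the address that host.minikube.internal resolves to (192.168.49.1 here, the host side of the cluster network) before pinging it; the two steps collapse into one pod-side check, assuming the same busybox deployment:

  # Resolve host.minikube.internal inside the pod and ping whatever address comes back.
  kubectl --context ha-925058 exec deploy/busybox -- sh -c \
    'ip=$(nslookup host.minikube.internal | awk "NR==5" | cut -d" " -f3); ping -c 1 "$ip"'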

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node add --alsologtostderr -v 5
E1129 09:39:47.078842  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.474789  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.481169  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.492593  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.513980  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.555403  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.636931  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:11.798592  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:12.120333  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:12.762541  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:14.044530  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:40:16.606244  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 node add --alsologtostderr -v 5: (58.387844105s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5: (1.094532978s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-925058 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1129 09:40:21.728621  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.091883362s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 status --output json --alsologtostderr -v 5: (1.037391091s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp testdata/cp-test.txt ha-925058:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3497802960/001/cp-test_ha-925058.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058:/home/docker/cp-test.txt ha-925058-m02:/home/docker/cp-test_ha-925058_ha-925058-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test_ha-925058_ha-925058-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058:/home/docker/cp-test.txt ha-925058-m03:/home/docker/cp-test_ha-925058_ha-925058-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test_ha-925058_ha-925058-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058:/home/docker/cp-test.txt ha-925058-m04:/home/docker/cp-test_ha-925058_ha-925058-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test_ha-925058_ha-925058-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp testdata/cp-test.txt ha-925058-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3497802960/001/cp-test_ha-925058-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m02:/home/docker/cp-test.txt ha-925058:/home/docker/cp-test_ha-925058-m02_ha-925058.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test_ha-925058-m02_ha-925058.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m02:/home/docker/cp-test.txt ha-925058-m03:/home/docker/cp-test_ha-925058-m02_ha-925058-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test_ha-925058-m02_ha-925058-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m02:/home/docker/cp-test.txt ha-925058-m04:/home/docker/cp-test_ha-925058-m02_ha-925058-m04.txt
E1129 09:40:31.970357  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test_ha-925058-m02_ha-925058-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp testdata/cp-test.txt ha-925058-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3497802960/001/cp-test_ha-925058-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m03:/home/docker/cp-test.txt ha-925058:/home/docker/cp-test_ha-925058-m03_ha-925058.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test_ha-925058-m03_ha-925058.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m03:/home/docker/cp-test.txt ha-925058-m02:/home/docker/cp-test_ha-925058-m03_ha-925058-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test_ha-925058-m03_ha-925058-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m03:/home/docker/cp-test.txt ha-925058-m04:/home/docker/cp-test_ha-925058-m03_ha-925058-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test_ha-925058-m03_ha-925058-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp testdata/cp-test.txt ha-925058-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3497802960/001/cp-test_ha-925058-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m04:/home/docker/cp-test.txt ha-925058:/home/docker/cp-test_ha-925058-m04_ha-925058.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058 "sudo cat /home/docker/cp-test_ha-925058-m04_ha-925058.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m04:/home/docker/cp-test.txt ha-925058-m02:/home/docker/cp-test_ha-925058-m04_ha-925058-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test_ha-925058-m04_ha-925058-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 cp ha-925058-m04:/home/docker/cp-test.txt ha-925058-m03:/home/docker/cp-test_ha-925058-m04_ha-925058-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m03 "sudo cat /home/docker/cp-test_ha-925058-m04_ha-925058-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.35s)
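
The copy matrix above exercises every host/node pair; each case is one minikube cp followed by an ssh read-back on the target, for example:

  # Host -> secondary control plane copy, then verify the file arrived.
  out/minikube-linux-arm64 -p ha-925058 cp testdata/cp-test.txt ha-925058-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-925058 ssh -n ha-925058-m02 "sudo cat /home/docker/cp-test.txt"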

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node stop m02 --alsologtostderr -v 5
E1129 09:40:52.452118  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 node stop m02 --alsologtostderr -v 5: (12.040152589s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5: exit status 7 (809.067967ms)

                                                
                                                
-- stdout --
	ha-925058
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925058-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925058-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925058-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:40:54.318678  345573 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:40:54.318794  345573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:40:54.318806  345573 out.go:374] Setting ErrFile to fd 2...
	I1129 09:40:54.318811  345573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:40:54.319067  345573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:40:54.319252  345573 out.go:368] Setting JSON to false
	I1129 09:40:54.319294  345573 mustload.go:66] Loading cluster: ha-925058
	I1129 09:40:54.319369  345573 notify.go:221] Checking for updates...
	I1129 09:40:54.320896  345573 config.go:182] Loaded profile config "ha-925058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:40:54.320919  345573 status.go:174] checking status of ha-925058 ...
	I1129 09:40:54.321621  345573 cli_runner.go:164] Run: docker container inspect ha-925058 --format={{.State.Status}}
	I1129 09:40:54.345385  345573 status.go:371] ha-925058 host status = "Running" (err=<nil>)
	I1129 09:40:54.345407  345573 host.go:66] Checking if "ha-925058" exists ...
	I1129 09:40:54.345712  345573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925058
	I1129 09:40:54.386453  345573 host.go:66] Checking if "ha-925058" exists ...
	I1129 09:40:54.386781  345573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:40:54.386830  345573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925058
	I1129 09:40:54.405787  345573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/ha-925058/id_rsa Username:docker}
	I1129 09:40:54.515638  345573 ssh_runner.go:195] Run: systemctl --version
	I1129 09:40:54.523159  345573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:40:54.538267  345573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:40:54.595485  345573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-29 09:40:54.585702619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:40:54.596066  345573 kubeconfig.go:125] found "ha-925058" server: "https://192.168.49.254:8443"
	I1129 09:40:54.596101  345573 api_server.go:166] Checking apiserver status ...
	I1129 09:40:54.596147  345573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:40:54.608279  345573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I1129 09:40:54.618000  345573 api_server.go:182] apiserver freezer: "13:freezer:/docker/8fdf8ab54ca4b84c88bce7cb37878be9322c98717c30c07d5351f37730dbc1b0/crio/crio-5b28c2944407ddb0cf8e42ee2db317a843266d6b19bc3d543a23e6b684b7b0df"
	I1129 09:40:54.618065  345573 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8fdf8ab54ca4b84c88bce7cb37878be9322c98717c30c07d5351f37730dbc1b0/crio/crio-5b28c2944407ddb0cf8e42ee2db317a843266d6b19bc3d543a23e6b684b7b0df/freezer.state
	I1129 09:40:54.626582  345573 api_server.go:204] freezer state: "THAWED"
	I1129 09:40:54.626608  345573 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 09:40:54.634927  345573 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 09:40:54.634975  345573 status.go:463] ha-925058 apiserver status = Running (err=<nil>)
	I1129 09:40:54.635003  345573 status.go:176] ha-925058 status: &{Name:ha-925058 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:40:54.635026  345573 status.go:174] checking status of ha-925058-m02 ...
	I1129 09:40:54.635374  345573 cli_runner.go:164] Run: docker container inspect ha-925058-m02 --format={{.State.Status}}
	I1129 09:40:54.653540  345573 status.go:371] ha-925058-m02 host status = "Stopped" (err=<nil>)
	I1129 09:40:54.653568  345573 status.go:384] host is not running, skipping remaining checks
	I1129 09:40:54.653576  345573 status.go:176] ha-925058-m02 status: &{Name:ha-925058-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:40:54.653596  345573 status.go:174] checking status of ha-925058-m03 ...
	I1129 09:40:54.653943  345573 cli_runner.go:164] Run: docker container inspect ha-925058-m03 --format={{.State.Status}}
	I1129 09:40:54.672425  345573 status.go:371] ha-925058-m03 host status = "Running" (err=<nil>)
	I1129 09:40:54.672450  345573 host.go:66] Checking if "ha-925058-m03" exists ...
	I1129 09:40:54.672754  345573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925058-m03
	I1129 09:40:54.691770  345573 host.go:66] Checking if "ha-925058-m03" exists ...
	I1129 09:40:54.692104  345573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:40:54.692149  345573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925058-m03
	I1129 09:40:54.709551  345573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/ha-925058-m03/id_rsa Username:docker}
	I1129 09:40:54.815796  345573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:40:54.830785  345573 kubeconfig.go:125] found "ha-925058" server: "https://192.168.49.254:8443"
	I1129 09:40:54.830814  345573 api_server.go:166] Checking apiserver status ...
	I1129 09:40:54.830862  345573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:40:54.844416  345573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	I1129 09:40:54.853116  345573 api_server.go:182] apiserver freezer: "13:freezer:/docker/fe53a99b26b0a226756828e18e076e0a1283fc9f351f48f1415a212fd6ecc846/crio/crio-4f661777a93ec6c268d0f46779c73fe1a442d8a7fab2e045ce2a43aae67ed924"
	I1129 09:40:54.853208  345573 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe53a99b26b0a226756828e18e076e0a1283fc9f351f48f1415a212fd6ecc846/crio/crio-4f661777a93ec6c268d0f46779c73fe1a442d8a7fab2e045ce2a43aae67ed924/freezer.state
	I1129 09:40:54.861023  345573 api_server.go:204] freezer state: "THAWED"
	I1129 09:40:54.861055  345573 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 09:40:54.869319  345573 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 09:40:54.869356  345573 status.go:463] ha-925058-m03 apiserver status = Running (err=<nil>)
	I1129 09:40:54.869382  345573 status.go:176] ha-925058-m03 status: &{Name:ha-925058-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:40:54.869406  345573 status.go:174] checking status of ha-925058-m04 ...
	I1129 09:40:54.869740  345573 cli_runner.go:164] Run: docker container inspect ha-925058-m04 --format={{.State.Status}}
	I1129 09:40:54.887682  345573 status.go:371] ha-925058-m04 host status = "Running" (err=<nil>)
	I1129 09:40:54.887713  345573 host.go:66] Checking if "ha-925058-m04" exists ...
	I1129 09:40:54.888048  345573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925058-m04
	I1129 09:40:54.908104  345573 host.go:66] Checking if "ha-925058-m04" exists ...
	I1129 09:40:54.908412  345573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:40:54.908487  345573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925058-m04
	I1129 09:40:54.926990  345573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/ha-925058-m04/id_rsa Username:docker}
	I1129 09:40:55.036758  345573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:40:55.066288  345573 status.go:176] ha-925058-m04 status: &{Name:ha-925058-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
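
As the stderr shows, status decides whether each apiserver is up by locating its container through the freezer cgroup and then probing https://192.168.49.254:8443/healthz, the load-balanced endpoint recorded in the kubeconfig. The same probe can be run by hand from inside any running node, a sketch assuming curl is available in the node image:

  # Probe the shared apiserver endpoint from inside a control-plane node; expect "ok".
  out/minikube-linux-arm64 -p ha-925058 ssh "curl -sk https://192.168.49.254:8443/healthz; echo"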

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (20.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 node start m02 --alsologtostderr -v 5: (19.414272305s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5: (1.211382164s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.027506459s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 stop --alsologtostderr -v 5
E1129 09:41:33.414282  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 stop --alsologtostderr -v 5: (31.320596973s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 start --wait true --alsologtostderr -v 5
E1129 09:42:55.336119  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 start --wait true --alsologtostderr -v 5: (1m30.187932754s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node delete m03 --alsologtostderr -v 5
E1129 09:43:24.013198  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 node delete m03 --alsologtostderr -v 5: (10.871941656s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.81s)
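Note: the final check above asks each node for its Ready condition via a kubectl go-template. For reference, the same query can be run by hand outside the test harness; a minimal sketch, reusing this run's profile name as the kubeconfig context (any other context works the same way):

  # print one Ready-condition status ("True"/"False") per node
  kubectl --context ha-925058 get nodes \
    -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'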

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (25.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 stop --alsologtostderr -v 5: (25.518694831s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5: exit status 7 (124.408241ms)

                                                
                                                
-- stdout --
	ha-925058
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925058-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925058-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:43:57.520297  357615 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:43:57.520457  357615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:43:57.520467  357615 out.go:374] Setting ErrFile to fd 2...
	I1129 09:43:57.520474  357615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:43:57.520744  357615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:43:57.520931  357615 out.go:368] Setting JSON to false
	I1129 09:43:57.520968  357615 mustload.go:66] Loading cluster: ha-925058
	I1129 09:43:57.521071  357615 notify.go:221] Checking for updates...
	I1129 09:43:57.521395  357615 config.go:182] Loaded profile config "ha-925058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:43:57.521449  357615 status.go:174] checking status of ha-925058 ...
	I1129 09:43:57.522052  357615 cli_runner.go:164] Run: docker container inspect ha-925058 --format={{.State.Status}}
	I1129 09:43:57.542321  357615 status.go:371] ha-925058 host status = "Stopped" (err=<nil>)
	I1129 09:43:57.542343  357615 status.go:384] host is not running, skipping remaining checks
	I1129 09:43:57.542349  357615 status.go:176] ha-925058 status: &{Name:ha-925058 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:43:57.542380  357615 status.go:174] checking status of ha-925058-m02 ...
	I1129 09:43:57.542726  357615 cli_runner.go:164] Run: docker container inspect ha-925058-m02 --format={{.State.Status}}
	I1129 09:43:57.572294  357615 status.go:371] ha-925058-m02 host status = "Stopped" (err=<nil>)
	I1129 09:43:57.572317  357615 status.go:384] host is not running, skipping remaining checks
	I1129 09:43:57.572325  357615 status.go:176] ha-925058-m02 status: &{Name:ha-925058-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:43:57.572344  357615 status.go:174] checking status of ha-925058-m04 ...
	I1129 09:43:57.572643  357615 cli_runner.go:164] Run: docker container inspect ha-925058-m04 --format={{.State.Status}}
	I1129 09:43:57.590108  357615 status.go:371] ha-925058-m04 host status = "Stopped" (err=<nil>)
	I1129 09:43:57.590138  357615 status.go:384] host is not running, skipping remaining checks
	I1129 09:43:57.590145  357615 status.go:176] ha-925058-m04 status: &{Name:ha-925058-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.64s)
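Worth noting from the output above: once the hosts are stopped, "minikube status" exits non-zero (7 in this run), so scripts can branch on the exit code instead of parsing the text report. A minimal shell sketch using this run's profile name:

  # capture the exit code rather than scraping the status text
  minikube -p ha-925058 status --alsologtostderr -v 5
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "cluster not fully running (minikube status exited $rc)"
  fi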

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (84.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1129 09:45:11.475368  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m23.941634697s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (67.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 node add --control-plane --alsologtostderr -v 5
E1129 09:45:39.178208  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 node add --control-plane --alsologtostderr -v 5: (1m6.777474962s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-925058 status --alsologtostderr -v 5: (1.08447628s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.050751246s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-434377 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-434377 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.398430132s)
--- PASS: TestJSONOutput/start/Command (80.41s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-434377 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-434377 --output=json --user=testUser: (5.831686172s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-541972 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-541972 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.734615ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"83bb042c-1ea0-4844-a514-6947a2845448","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-541972] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1307449-e02d-455d-bad4-240267f0e140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"825273a2-4e1a-43dd-9114-befcf034aaff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"744053d5-d0b6-467b-ab8f-3cb8062684b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig"}}
	{"specversion":"1.0","id":"9b91a88f-58cd-4998-9874-50a26d627437","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube"}}
	{"specversion":"1.0","id":"ac051622-cd19-4526-a3f4-2b050c89b320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"012e3cff-dd2d-47b6-a727-2e567724708f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5d7ea747-8b35-48a4-a18a-7286252429d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-541972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-541972
--- PASS: TestErrorJSONOutput (0.25s)
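Each line of the --output=json stream above is a CloudEvents-style envelope with a "type" field and a "data" payload, so error events can be filtered out of the stream with a standard JSON tool. A minimal sketch, assuming jq is available (the profile name below is illustrative; --driver=fail is used deliberately, as in the test, to provoke the DRV_UNSUPPORTED_OS error event):

  # surface only error events and their messages from a JSON-output run
  minikube start -p json-output-demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'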

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.75s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-306252 --network=
E1129 09:48:24.013865  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-306252 --network=: (37.516711496s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-306252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-306252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-306252: (2.209759465s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.75s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.4s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-023338 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-023338 --network=bridge: (33.321303844s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-023338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-023338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-023338: (2.057213089s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.40s)

                                                
                                    
x
+
TestKicExistingNetwork (36.59s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1129 09:49:29.815422  302182 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1129 09:49:29.831350  302182 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1129 09:49:29.832255  302182 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1129 09:49:29.832297  302182 cli_runner.go:164] Run: docker network inspect existing-network
W1129 09:49:29.851202  302182 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1129 09:49:29.851233  302182 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1129 09:49:29.851284  302182 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1129 09:49:29.851386  302182 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1129 09:49:29.868720  302182 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e926c45953c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:b7:db:16:55:ea} reservation:<nil>}
I1129 09:49:29.869071  302182 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001619de0}
I1129 09:49:29.869097  302182 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1129 09:49:29.869152  302182 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1129 09:49:29.931304  302182 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-336243 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-336243 --network=existing-network: (34.3365127s)
helpers_test.go:175: Cleaning up "existing-network-336243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-336243
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-336243: (2.100370522s)
I1129 09:50:06.384342  302182 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.59s)
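The sequence above boils down to: create a bridge network up front, then point minikube at it with --network instead of letting it allocate one. A minimal manual equivalent, assuming the docker driver and a subnet not already used by another docker network (network name, profile name, and CIDR below are illustrative):

  # pre-create the network minikube should join
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 my-existing-net
  # start a profile on that network
  minikube start -p kic-net-demo --network=my-existing-net --driver=docker --container-runtime=crio
  # clean up: delete the profile first, then the network
  minikube delete -p kic-net-demo
  docker network rm my-existing-net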

                                                
                                    
x
+
TestKicCustomSubnet (37.72s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-990182 --subnet=192.168.60.0/24
E1129 09:50:11.478280  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-990182 --subnet=192.168.60.0/24: (35.461652357s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-990182 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-990182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-990182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-990182: (2.229174781s)
--- PASS: TestKicCustomSubnet (37.72s)

                                                
                                    
x
+
TestKicStaticIP (37.4s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-248165 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-248165 --static-ip=192.168.200.200: (35.061572279s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-248165 ip
helpers_test.go:175: Cleaning up "static-ip-248165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-248165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-248165: (2.17437278s)
--- PASS: TestKicStaticIP (37.40s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (69.87s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-139414 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-139414 --driver=docker  --container-runtime=crio: (32.180741408s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-142311 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-142311 --driver=docker  --container-runtime=crio: (32.121323756s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-139414
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-142311
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-142311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-142311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-142311: (2.097726288s)
helpers_test.go:175: Cleaning up "first-139414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-139414
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-139414: (2.06040924s)
--- PASS: TestMinikubeProfile (69.87s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-395577 --memory=3072 --mount-string /tmp/TestMountStartserial2611813135/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-395577 --memory=3072 --mount-string /tmp/TestMountStartserial2611813135/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.27858201s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-395577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-397514 --memory=3072 --mount-string /tmp/TestMountStartserial2611813135/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-397514 --memory=3072 --mount-string /tmp/TestMountStartserial2611813135/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.331499116s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-397514 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-395577 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-395577 --alsologtostderr -v=5: (1.703914429s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-397514 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-397514
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-397514: (1.291468341s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-397514
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-397514: (6.946541195s)
--- PASS: TestMountStart/serial/RestartStopped (7.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-397514 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-705418 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1129 09:53:24.013523  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:55:11.475020  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-705418 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.048169443s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.57s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-705418 -- rollout status deployment/busybox: (2.889413574s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-8vns7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-bhtjp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-8vns7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-bhtjp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-8vns7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-bhtjp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.64s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-8vns7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-8vns7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-bhtjp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-705418 -- exec busybox-7b57f96db7-bhtjp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
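What this test exercises: every pod should be able to resolve host.minikube.internal and reach the resulting host address. A quick manual spot-check, assuming a busybox-style pod named "busybox" is already running in the cluster (the pod name is illustrative; 192.168.67.1 is the address reported in this run):

  # resolve the host entry from inside the pod
  kubectl exec busybox -- nslookup host.minikube.internal
  # then ping the address it reports
  kubectl exec busybox -- ping -c 1 192.168.67.1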

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-705418 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-705418 -v=5 --alsologtostderr: (57.704227053s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.41s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-705418 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp testdata/cp-test.txt multinode-705418:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3557404772/001/cp-test_multinode-705418.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418:/home/docker/cp-test.txt multinode-705418-m02:/home/docker/cp-test_multinode-705418_multinode-705418-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test_multinode-705418_multinode-705418-m02.txt"
E1129 09:56:27.080856  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418:/home/docker/cp-test.txt multinode-705418-m03:/home/docker/cp-test_multinode-705418_multinode-705418-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test_multinode-705418_multinode-705418-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp testdata/cp-test.txt multinode-705418-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3557404772/001/cp-test_multinode-705418-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m02:/home/docker/cp-test.txt multinode-705418:/home/docker/cp-test_multinode-705418-m02_multinode-705418.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test_multinode-705418-m02_multinode-705418.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m02:/home/docker/cp-test.txt multinode-705418-m03:/home/docker/cp-test_multinode-705418-m02_multinode-705418-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test_multinode-705418-m02_multinode-705418-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp testdata/cp-test.txt multinode-705418-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3557404772/001/cp-test_multinode-705418-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m03:/home/docker/cp-test.txt multinode-705418:/home/docker/cp-test_multinode-705418-m03_multinode-705418.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418 "sudo cat /home/docker/cp-test_multinode-705418-m03_multinode-705418.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 cp multinode-705418-m03:/home/docker/cp-test.txt multinode-705418-m02:/home/docker/cp-test_multinode-705418-m03_multinode-705418-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test_multinode-705418-m03_multinode-705418-m02.txt"
E1129 09:56:34.540184  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/CopyFile (10.38s)
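The copy matrix above uses two primitives: "minikube cp" to move a file between the host and any node (or node to node), and "minikube ssh -n" to verify it landed. A minimal sketch using this run's profile and node names:

  # host -> secondary node
  minikube -p multinode-705418 cp testdata/cp-test.txt multinode-705418-m02:/home/docker/cp-test.txt
  # verify on that node
  minikube -p multinode-705418 ssh -n multinode-705418-m02 "sudo cat /home/docker/cp-test.txt"
  # node -> node (source node:path on the left, target node:path on the right)
  minikube -p multinode-705418 cp multinode-705418-m02:/home/docker/cp-test.txt multinode-705418-m03:/home/docker/cp-test.txt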

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-705418 node stop m03: (1.305244967s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-705418 status: exit status 7 (549.482985ms)

                                                
                                                
-- stdout --
	multinode-705418
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-705418-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-705418-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr: exit status 7 (536.530577ms)

                                                
                                                
-- stdout --
	multinode-705418
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-705418-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-705418-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:56:36.585071  407951 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:56:36.585227  407951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:56:36.585238  407951 out.go:374] Setting ErrFile to fd 2...
	I1129 09:56:36.585243  407951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:56:36.585475  407951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:56:36.585654  407951 out.go:368] Setting JSON to false
	I1129 09:56:36.585682  407951 mustload.go:66] Loading cluster: multinode-705418
	I1129 09:56:36.585772  407951 notify.go:221] Checking for updates...
	I1129 09:56:36.586104  407951 config.go:182] Loaded profile config "multinode-705418": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:56:36.586122  407951 status.go:174] checking status of multinode-705418 ...
	I1129 09:56:36.586781  407951 cli_runner.go:164] Run: docker container inspect multinode-705418 --format={{.State.Status}}
	I1129 09:56:36.605508  407951 status.go:371] multinode-705418 host status = "Running" (err=<nil>)
	I1129 09:56:36.605529  407951 host.go:66] Checking if "multinode-705418" exists ...
	I1129 09:56:36.605809  407951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-705418
	I1129 09:56:36.631967  407951 host.go:66] Checking if "multinode-705418" exists ...
	I1129 09:56:36.632292  407951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:56:36.632424  407951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-705418
	I1129 09:56:36.656421  407951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/multinode-705418/id_rsa Username:docker}
	I1129 09:56:36.759192  407951 ssh_runner.go:195] Run: systemctl --version
	I1129 09:56:36.765373  407951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:56:36.778333  407951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:56:36.833895  407951 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-29 09:56:36.823904766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 09:56:36.834624  407951 kubeconfig.go:125] found "multinode-705418" server: "https://192.168.67.2:8443"
	I1129 09:56:36.834673  407951 api_server.go:166] Checking apiserver status ...
	I1129 09:56:36.834724  407951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:56:36.845926  407951 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1218/cgroup
	I1129 09:56:36.854340  407951 api_server.go:182] apiserver freezer: "13:freezer:/docker/7623635160ede0498d9f2682fde55414be2132c89a19580d1a114cc9cbe01fb5/crio/crio-c52bb478a08c7426ffb64ab8190a349306568af9e261b5405cd086305815c061"
	I1129 09:56:36.854411  407951 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7623635160ede0498d9f2682fde55414be2132c89a19580d1a114cc9cbe01fb5/crio/crio-c52bb478a08c7426ffb64ab8190a349306568af9e261b5405cd086305815c061/freezer.state
	I1129 09:56:36.862061  407951 api_server.go:204] freezer state: "THAWED"
	I1129 09:56:36.862115  407951 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1129 09:56:36.870508  407951 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1129 09:56:36.870538  407951 status.go:463] multinode-705418 apiserver status = Running (err=<nil>)
	I1129 09:56:36.870549  407951 status.go:176] multinode-705418 status: &{Name:multinode-705418 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:56:36.870597  407951 status.go:174] checking status of multinode-705418-m02 ...
	I1129 09:56:36.870936  407951 cli_runner.go:164] Run: docker container inspect multinode-705418-m02 --format={{.State.Status}}
	I1129 09:56:36.889458  407951 status.go:371] multinode-705418-m02 host status = "Running" (err=<nil>)
	I1129 09:56:36.889482  407951 host.go:66] Checking if "multinode-705418-m02" exists ...
	I1129 09:56:36.889867  407951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-705418-m02
	I1129 09:56:36.907731  407951 host.go:66] Checking if "multinode-705418-m02" exists ...
	I1129 09:56:36.908144  407951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:56:36.908213  407951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-705418-m02
	I1129 09:56:36.926018  407951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/22000-300311/.minikube/machines/multinode-705418-m02/id_rsa Username:docker}
	I1129 09:56:37.030306  407951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:56:37.043670  407951 status.go:176] multinode-705418-m02 status: &{Name:multinode-705418-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:56:37.043717  407951 status.go:174] checking status of multinode-705418-m03 ...
	I1129 09:56:37.044059  407951 cli_runner.go:164] Run: docker container inspect multinode-705418-m03 --format={{.State.Status}}
	I1129 09:56:37.060972  407951 status.go:371] multinode-705418-m03 host status = "Stopped" (err=<nil>)
	I1129 09:56:37.060999  407951 status.go:384] host is not running, skipping remaining checks
	I1129 09:56:37.061007  407951 status.go:176] multinode-705418-m03 status: &{Name:multinode-705418-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
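The status check above verifies the control plane in two steps: it resolves the kube-apiserver process to its freezer cgroup and confirms the state is THAWED (i.e. the container is not paused), then probes the apiserver's /healthz endpoint. A rough shell equivalent of that sequence, with <pid> and <cgroup-path> standing in for the values the log resolves at runtime:

	# inside the node, e.g. via: out/minikube-linux-arm64 ssh -p multinode-705418
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                   # find the apiserver PID
	sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup               # map the PID to its freezer cgroup
	sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state    # "THAWED" means not paused
	curl -sk https://192.168.67.2:8443/healthz                     # the test expects "ok" (HTTP 200) here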

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-705418 node start m03 -v=5 --alsologtostderr: (7.302104928s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.25s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (77.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-705418
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-705418
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-705418: (25.26828439s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-705418 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-705418 --wait=true -v=5 --alsologtostderr: (52.021482031s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-705418
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.42s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-705418 node delete m03: (4.95565437s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)
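The final assertion relies on a kubectl go-template that prints each node's Ready condition, one status per line; after deleting m03 only the two remaining nodes should report. The same query can be run by hand:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected output after the delete: two lines, both "True"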

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 stop
E1129 09:58:24.012483  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-705418 stop: (23.808245087s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-705418 status: exit status 7 (92.818913ms)

                                                
                                                
-- stdout --
	multinode-705418
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-705418-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr: exit status 7 (91.641299ms)

                                                
                                                
-- stdout --
	multinode-705418
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-705418-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:58:32.351323  415765 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:58:32.351514  415765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:58:32.351541  415765 out.go:374] Setting ErrFile to fd 2...
	I1129 09:58:32.351562  415765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:58:32.351843  415765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 09:58:32.352054  415765 out.go:368] Setting JSON to false
	I1129 09:58:32.352112  415765 mustload.go:66] Loading cluster: multinode-705418
	I1129 09:58:32.352187  415765 notify.go:221] Checking for updates...
	I1129 09:58:32.353436  415765 config.go:182] Loaded profile config "multinode-705418": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:58:32.353480  415765 status.go:174] checking status of multinode-705418 ...
	I1129 09:58:32.354025  415765 cli_runner.go:164] Run: docker container inspect multinode-705418 --format={{.State.Status}}
	I1129 09:58:32.370872  415765 status.go:371] multinode-705418 host status = "Stopped" (err=<nil>)
	I1129 09:58:32.370895  415765 status.go:384] host is not running, skipping remaining checks
	I1129 09:58:32.370902  415765 status.go:176] multinode-705418 status: &{Name:multinode-705418 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:58:32.370934  415765 status.go:174] checking status of multinode-705418-m02 ...
	I1129 09:58:32.371247  415765 cli_runner.go:164] Run: docker container inspect multinode-705418-m02 --format={{.State.Status}}
	I1129 09:58:32.395620  415765 status.go:371] multinode-705418-m02 host status = "Stopped" (err=<nil>)
	I1129 09:58:32.395645  415765 status.go:384] host is not running, skipping remaining checks
	I1129 09:58:32.395652  415765 status.go:176] multinode-705418-m02 status: &{Name:multinode-705418-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)
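Note that `minikube status` intentionally exits non-zero (status 7) when a host is stopped, so callers have to treat that code as a valid state rather than a failure. A minimal pattern for scripting against it, using the same profile name:

	out/minikube-linux-arm64 -p multinode-705418 status; rc=$?
	# rc=0: everything running; rc=7: one or more hosts stopped (as in the output above)
	if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then echo "unexpected status exit code: $rc" >&2; fi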

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-705418 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-705418 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.506861654s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-705418 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.21s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-705418
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-705418-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-705418-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.027525ms)

                                                
                                                
-- stdout --
	* [multinode-705418-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-705418-m02' is duplicated with machine name 'multinode-705418-m02' in profile 'multinode-705418'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-705418-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-705418-m03 --driver=docker  --container-runtime=crio: (30.915868006s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-705418
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-705418: exit status 80 (338.280114ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-705418 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-705418-m03 already exists in multinode-705418-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-705418-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-705418-m03: (2.324323785s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.73s)
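Both non-zero exits above are intentional guard rails: a new profile may not reuse a machine name that already belongs to an existing profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose generated name collides with another profile (exit 80, GUEST_NODE_ADD). Condensed from the log:

	out/minikube-linux-arm64 start -p multinode-705418-m02 --driver=docker --container-runtime=crio   # exit 14: clashes with machine m02 of profile multinode-705418
	out/minikube-linux-arm64 start -p multinode-705418-m03 --driver=docker --container-runtime=crio   # succeeds: standalone profile
	out/minikube-linux-arm64 node add -p multinode-705418                                             # exit 80: the next node name (m03) is now taken by that profile
	out/minikube-linux-arm64 delete -p multinode-705418-m03                                           # cleanup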

                                                
                                    
x
+
TestPreload (120.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-937842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1129 10:00:11.475450  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-937842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m2.811680176s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-937842 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-937842 image pull gcr.io/k8s-minikube/busybox: (2.458286881s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-937842
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-937842: (5.87163376s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-937842 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-937842 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.735101344s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-937842 image list
helpers_test.go:175: Cleaning up "test-preload-937842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-937842
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-937842: (2.467899777s)
--- PASS: TestPreload (120.58s)
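The preload exercise boots once without the preloaded image tarball, pulls an extra image, then restarts with preload enabled and checks that the manually pulled image is still present. The flow, condensed to the commands from the log:

	out/minikube-linux-arm64 start -p test-preload-937842 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload-937842 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload-937842
	out/minikube-linux-arm64 start -p test-preload-937842 --preload=true --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload-937842 image list   # busybox should still be listed after the preloaded restart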

                                                
                                    
x
+
TestScheduledStopUnix (106.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-547198 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-547198 --memory=3072 --driver=docker  --container-runtime=crio: (30.666091916s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-547198 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 10:02:37.094360  429757 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:02:37.094561  429757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:02:37.094593  429757 out.go:374] Setting ErrFile to fd 2...
	I1129 10:02:37.094615  429757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:02:37.094912  429757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:02:37.095202  429757 out.go:368] Setting JSON to false
	I1129 10:02:37.095358  429757 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:02:37.095753  429757 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:02:37.095863  429757 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/config.json ...
	I1129 10:02:37.096069  429757 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:02:37.096219  429757 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-547198 -n scheduled-stop-547198
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-547198 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 10:02:37.565610  429846 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:02:37.565844  429846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:02:37.565876  429846 out.go:374] Setting ErrFile to fd 2...
	I1129 10:02:37.565898  429846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:02:37.566211  429846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:02:37.566488  429846 out.go:368] Setting JSON to false
	I1129 10:02:37.567724  429846 daemonize_unix.go:73] killing process 429774 as it is an old scheduled stop
	I1129 10:02:37.567921  429846 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:02:37.568342  429846 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:02:37.568448  429846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/config.json ...
	I1129 10:02:37.568671  429846 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:02:37.568841  429846 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 429774 is a zombie
I1129 10:02:37.574284  302182 retry.go:31] will retry after 111.332µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.575384  302182 retry.go:31] will retry after 125.016µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.576513  302182 retry.go:31] will retry after 213.189µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.577626  302182 retry.go:31] will retry after 182.447µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.578767  302182 retry.go:31] will retry after 480.872µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.579835  302182 retry.go:31] will retry after 684.923µs: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.580954  302182 retry.go:31] will retry after 1.578135ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.583127  302182 retry.go:31] will retry after 1.654457ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.585316  302182 retry.go:31] will retry after 2.852555ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.589652  302182 retry.go:31] will retry after 2.642509ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.592880  302182 retry.go:31] will retry after 6.253189ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.600121  302182 retry.go:31] will retry after 4.85583ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.605366  302182 retry.go:31] will retry after 13.303579ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.619609  302182 retry.go:31] will retry after 26.949697ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.646821  302182 retry.go:31] will retry after 24.673036ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
I1129 10:02:37.672071  302182 retry.go:31] will retry after 23.290345ms: open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-547198 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-547198 -n scheduled-stop-547198
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-547198
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-547198 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 10:03:03.487235  430213 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:03:03.487427  430213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:03:03.487459  430213 out.go:374] Setting ErrFile to fd 2...
	I1129 10:03:03.487481  430213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:03:03.487773  430213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:03:03.488058  430213 out.go:368] Setting JSON to false
	I1129 10:03:03.488203  430213 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:03:03.488601  430213 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 10:03:03.488714  430213 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/scheduled-stop-547198/config.json ...
	I1129 10:03:03.488922  430213 mustload.go:66] Loading cluster: scheduled-stop-547198
	I1129 10:03:03.489080  430213 config.go:182] Loaded profile config "scheduled-stop-547198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1129 10:03:24.012443  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-547198
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-547198: exit status 7 (68.860291ms)

                                                
                                                
-- stdout --
	scheduled-stop-547198
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-547198 -n scheduled-stop-547198
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-547198 -n scheduled-stop-547198: exit status 7 (71.470129ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-547198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-547198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-547198: (3.8999854s)
--- PASS: TestScheduledStopUnix (106.16s)
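Scheduled stops are driven entirely through the `stop` subcommand: `--schedule` arms a background stop, issuing it again replaces the previous schedule (the log shows the earlier daemonized process being killed), and `--cancel-scheduled` clears any pending stop. The essential sequence, condensed:

	out/minikube-linux-arm64 stop -p scheduled-stop-547198 --schedule 5m    # arm a stop five minutes out
	out/minikube-linux-arm64 stop -p scheduled-stop-547198 --schedule 15s   # re-arm; the earlier scheduled stop is discarded
	out/minikube-linux-arm64 stop -p scheduled-stop-547198 --cancel-scheduled
	out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-547198   # "Running" after a cancel, "Stopped" (exit 7) once a schedule has fired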

                                                
                                    
x
+
TestInsufficientStorage (13.22s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-237416 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-237416 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.600848525s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a5f792f-a4a3-4546-b71c-f64dcb81fda5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-237416] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d2ed28c-f1f7-4a30-9134-338882a55fe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"f088bbff-7f67-45f3-92a8-cf9dee224812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3ac1a520-9eaf-4875-ac3e-c595fba5205b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig"}}
	{"specversion":"1.0","id":"8488f1fc-a04b-4c89-abf5-a96f90cf99f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube"}}
	{"specversion":"1.0","id":"8464e4b9-82b7-49d0-a6d2-6e84ab673eda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c687fe28-a14e-4655-984d-d62e5fe9da8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd2062a9-2fbb-490d-9859-663907b1cc42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"82c691cf-e1f9-4fbe-a82f-ba43affd4443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"06e9c9d9-ae80-4eeb-bd0d-cddb3c9ccf0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3a08bfb-57a9-4d51-b1e1-b90347a28ca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ea418975-1948-4e9f-a567-34f2d8578fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-237416\" primary control-plane node in \"insufficient-storage-237416\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d217909c-f21e-4016-923e-73b3e7d672e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2c1ee2e-8516-4b23-a251-d3c6f5433b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e9734f3-96d0-4fd1-8fc3-21aadb31411b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-237416 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-237416 --output=json --layout=cluster: exit status 7 (316.828957ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-237416","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-237416","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 10:04:03.443020  431923 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-237416" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-237416 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-237416 --output=json --layout=cluster: exit status 7 (298.088326ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-237416","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-237416","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 10:04:03.742016  431990 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-237416" does not appear in /home/jenkins/minikube-integration/22000-300311/kubeconfig
	E1129 10:04:03.751786  431990 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/insufficient-storage-237416/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-237416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-237416
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-237416: (2.006315005s)
--- PASS: TestInsufficientStorage (13.22s)
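The out-of-space condition is simulated rather than real: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values shown in the JSON events appear to be test-only overrides that make minikube treat /var as full, so `start` aborts with exit code 26 (RSRC_DOCKER_STORAGE) and the cluster is left reporting StatusCode 507. A rough reproduction under that assumption:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-237416 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio   # exit 26
	out/minikube-linux-arm64 status -p insufficient-storage-237416 --output=json --layout=cluster   # StatusName "InsufficientStorage", exit 7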

                                                
                                    
x
+
TestRunningBinaryUpgrade (305.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2513976958 start -p running-upgrade-493711 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2513976958 start -p running-upgrade-493711 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.149710056s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-493711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-493711 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.575192646s)
helpers_test.go:175: Cleaning up "running-upgrade-493711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-493711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-493711: (2.105892274s)
--- PASS: TestRunningBinaryUpgrade (305.98s)

                                                
                                    
x
+
TestKubernetesUpgrade (335.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.559520274s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-510809
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-510809: (1.388816318s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-510809 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-510809 status --format={{.Host}}: exit status 7 (77.81086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.6595773s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-510809 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (95.941854ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-510809] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-510809
	    minikube start -p kubernetes-upgrade-510809 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5108092 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-510809 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.321852195s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-510809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-510809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-510809: (2.335423721s)
--- PASS: TestKubernetesUpgrade (335.55s)
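Kubernetes upgrades only move forward: restarting a stopped cluster with a newer --kubernetes-version upgrades it in place, while requesting an older version on the same profile fails immediately with exit 106 and the delete-or-second-cluster suggestions shown above. The flow, condensed:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-510809
	out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # in-place upgrade
	out/minikube-linux-arm64 start -p kubernetes-upgrade-510809 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 106: downgrade refused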

                                                
                                    
x
+
TestMissingContainerUpgrade (104.92s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.636986640 start -p missing-upgrade-246693 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.636986640 start -p missing-upgrade-246693 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.264944997s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-246693
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-246693
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-246693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1129 10:05:11.475339  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-246693 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.860171798s)
helpers_test.go:175: Cleaning up "missing-upgrade-246693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-246693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-246693: (2.033943781s)
--- PASS: TestMissingContainerUpgrade (104.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (107.942632ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-399835] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
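--no-kubernetes and --kubernetes-version are mutually exclusive, and the hint also covers the case where a version is pinned in the global minikube config rather than passed on the command line:

	out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
	out/minikube-linux-arm64 config unset kubernetes-version   # clear a globally pinned version, then retry with --no-kubernetes alone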

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-399835 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-399835 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.119550359s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-399835 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (115.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m52.910563275s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-399835 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-399835 status -o json: exit status 2 (509.164489ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-399835","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-399835
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-399835: (2.301087412s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (115.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-399835 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.971604738s)
--- PASS: TestNoKubernetes/serial/Start (7.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22000-300311/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-399835 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-399835 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.180264ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
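The check treats any non-zero exit from `systemctl is-active --quiet` as confirmation that no kubelet is running; `--quiet` suppresses output, so only the exit status matters, and the "status 3" seen over ssh is systemctl reporting the unit as inactive. The same check by hand:

	out/minikube-linux-arm64 ssh -p NoKubernetes-399835 "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet active (unexpected)" \
	  || echo "kubelet not running (expected with --no-kubernetes)"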

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (32.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (16.187132565s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (15.952580387s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-399835
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-399835: (1.404391554s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-399835 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-399835 --driver=docker  --container-runtime=crio: (7.161483233s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-399835 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-399835 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.779276ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (305.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.627221773 start -p stopped-upgrade-467241 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.627221773 start -p stopped-upgrade-467241 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.416503222s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.627221773 -p stopped-upgrade-467241 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.627221773 -p stopped-upgrade-467241 stop: (1.246117338s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-467241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1129 10:08:24.012798  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:10:11.475735  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-467241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.185015924s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.85s)
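The stopped-binary upgrade path drives a previously released binary first (downloaded to a per-run path under /tmp), stops the cluster it created, and then points the freshly built binary at the same profile so it has to adopt and upgrade the existing state:

	/tmp/minikube-v1.35.0.627221773 start -p stopped-upgrade-467241 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.35.0.627221773 -p stopped-upgrade-467241 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-467241 --memory=3072 --driver=docker --container-runtime=crio   # new binary takes over the stopped v1.35.0 profile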

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-467241
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-467241: (1.220817081s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
TestPause/serial/Start (79.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-377932 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1129 10:13:07.082234  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:13:14.541440  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:13:24.013452  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-377932 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.128668609s)
--- PASS: TestPause/serial/Start (79.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.21s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-377932 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-377932 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.19632425s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.21s)

                                                
                                    
TestNetworkPlugins/group/false (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-151203 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-151203 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.617135ms)

                                                
                                                
-- stdout --
	* [false-151203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 10:15:26.694120  478805 out.go:360] Setting OutFile to fd 1 ...
	I1129 10:15:26.694262  478805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:15:26.694274  478805 out.go:374] Setting ErrFile to fd 2...
	I1129 10:15:26.694304  478805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 10:15:26.694596  478805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-300311/.minikube/bin
	I1129 10:15:26.695060  478805 out.go:368] Setting JSON to false
	I1129 10:15:26.696010  478805 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10676,"bootTime":1764400651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1129 10:15:26.696085  478805 start.go:143] virtualization:  
	I1129 10:15:26.699742  478805 out.go:179] * [false-151203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1129 10:15:26.702850  478805 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 10:15:26.702980  478805 notify.go:221] Checking for updates...
	I1129 10:15:26.708747  478805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 10:15:26.711637  478805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-300311/kubeconfig
	I1129 10:15:26.714608  478805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-300311/.minikube
	I1129 10:15:26.717528  478805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1129 10:15:26.720396  478805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 10:15:26.723856  478805 config.go:182] Loaded profile config "running-upgrade-493711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1129 10:15:26.724013  478805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 10:15:26.746832  478805 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1129 10:15:26.746979  478805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 10:15:26.804069  478805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-29 10:15:26.79484437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1129 10:15:26.804174  478805 docker.go:319] overlay module found
	I1129 10:15:26.807302  478805 out.go:179] * Using the docker driver based on user configuration
	I1129 10:15:26.810203  478805 start.go:309] selected driver: docker
	I1129 10:15:26.810222  478805 start.go:927] validating driver "docker" against <nil>
	I1129 10:15:26.810236  478805 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 10:15:26.813955  478805 out.go:203] 
	W1129 10:15:26.816850  478805 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1129 10:15:26.819751  478805 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-151203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 10:12:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-493711
contexts:
- context:
    cluster: running-upgrade-493711
    user: running-upgrade-493711
  name: running-upgrade-493711
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-493711
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.crt
    client-key: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-151203

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-151203"

                                                
                                                
----------------------- debugLogs end: false-151203 [took: 3.576896427s] --------------------------------
helpers_test.go:175: Cleaning up "false-151203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-151203
--- PASS: TestNetworkPlugins/group/false (3.91s)
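Note: the MK_USAGE failure above is the expected outcome of this test. minikube validates at start time that a CNI is configured whenever the crio container runtime is selected, because CRI-O relies on a CNI plugin for pod networking. As an illustrative sketch only (this command is not part of the recorded run), the same start invocation would pass that validation if an explicit CNI were chosen instead of --cni=false, for example:

	out/minikube-linux-arm64 start -p false-151203 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio

Any supported --cni selection (such as bridge, kindnet, calico, flannel, or a path to a CNI manifest) satisfies the check; only --cni=false is rejected with exit status 14 for this runtime.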

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.671521483s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-685516 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [50ae449b-ebf3-4617-bb6c-7e100cb4c66c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [50ae449b-ebf3-4617-bb6c-7e100cb4c66c] Running
E1129 10:18:24.012967  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003707932s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-685516 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-685516 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-685516 --alsologtostderr -v=3: (12.012132457s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516: exit status 7 (73.699754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-685516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (56.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-685516 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (56.494708684s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-685516 -n old-k8s-version-685516
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l7922" [2682d839-5219-4b13-8ea7-1a4463f4769f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112324s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l7922" [2682d839-5219-4b13-8ea7-1a4463f4769f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003617489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-685516 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-685516 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 10:20:11.475134  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.222390962s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-708011 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75efd665-57a2-4237-baf4-78e41ceda948] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75efd665-57a2-4237-baf4-78e41ceda948] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003952906s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-708011 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-708011 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-708011 --alsologtostderr -v=3: (12.076280099s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011: exit status 7 (138.112236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-708011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (57.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-708011 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.905516839s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-708011 -n embed-certs-708011
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.376186888s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7sxs9" [5c2df246-a248-427f-a965-5e7a96ae07a8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002822791s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7sxs9" [5c2df246-a248-427f-a965-5e7a96ae07a8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003766596s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-708011 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-708011 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-949993 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3fdb1ddf-7704-4f35-9630-eb7a372800cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3fdb1ddf-7704-4f35-9630-eb7a372800cd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003263943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-949993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.151189044s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-949993 --alsologtostderr -v=3
E1129 10:23:14.897662  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:14.904049  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:14.915587  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:14.936946  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:14.978596  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:15.060810  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:15.222673  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:15.543937  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:16.185792  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:17.467420  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:20.029685  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:24.013511  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-949993 --alsologtostderr -v=3: (12.110427414s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993: exit status 7 (86.852077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-949993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 10:23:25.151471  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:35.393287  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:23:55.874539  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-949993 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.186728153s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949993 -n no-preload-949993
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gzbxs" [e0aa7948-1813-4a7a-aee7-d516085b2f2a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003702989s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gzbxs" [e0aa7948-1813-4a7a-aee7-d516085b2f2a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003163704s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-949993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6a6a6bef-631a-4303-be59-a408f7f63f1e] Pending
helpers_test.go:352: "busybox" [6a6a6bef-631a-4303-be59-a408f7f63f1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6a6a6bef-631a-4303-be59-a408f7f63f1e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004770569s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-949993 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-194354 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-194354 --alsologtostderr -v=3: (12.133141087s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.686791265s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354: exit status 7 (81.624502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-194354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 10:25:11.475645  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-194354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.452032754s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-194354 -n default-k8s-diff-port-194354
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-156330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-156330 --alsologtostderr -v=3: (2.683662509s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330: exit status 7 (79.591627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-156330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-156330 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.892927321s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-156330 -n newest-cni-156330
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fxsbl" [286ab319-2221-4ac1-9d62-92ceeb4e7c1d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003995134s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-156330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fxsbl" [286ab319-2221-4ac1-9d62-92ceeb4e7c1d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003989705s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-194354 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1129 10:25:58.758917  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.575932132s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-194354 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.229925796s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.23s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-151203 "pgrep -a kubelet"
I1129 10:27:22.121603  302182 config.go:182] Loaded profile config "auto-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-296nw" [c459d4a6-0a09-4fef-861a-1a4f109a7d53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-296nw" [c459d4a6-0a09-4fef-861a-1a4f109a7d53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00407738s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-zbkvv" [773e556d-ab76-4cf1-bfe7-8e2af6a7ee59] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00332661s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-151203 "pgrep -a kubelet"
I1129 10:27:44.232749  302182 config.go:182] Loaded profile config "kindnet-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v4fjg" [37514b77-14a3-40d1-9db9-079fce73ffdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v4fjg" [37514b77-14a3-40d1-9db9-079fce73ffdc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003861926s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.46s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m22.762401905s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1129 10:28:20.315068  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:28:24.013406  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/addons-937561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:28:40.798058  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:28:42.601000  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.43160937s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.43s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2cgpj" [b52145d5-9a58-4023-823e-4f3da75a11e4] Running
E1129 10:29:21.760184  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003583089s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-151203 "pgrep -a kubelet"
I1129 10:29:23.777522  302182 config.go:182] Loaded profile config "calico-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xfblw" [a0e3eeae-f86f-48f4-b8e5-212a5ab69a4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xfblw" [a0e3eeae-f86f-48f4-b8e5-212a5ab69a4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005058682s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-151203 "pgrep -a kubelet"
E1129 10:29:28.574380  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:28.580734  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:28.598131  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:28.620928  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:28.662357  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:28.744156  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1129 10:29:28.744641  302182 config.go:182] Loaded profile config "custom-flannel-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-151203 replace --force -f testdata/netcat-deployment.yaml
E1129 10:29:28.906538  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qcmlp" [11cb9c5b-1ee5-4b3e-85cc-a51d1a76c811] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1129 10:29:29.230558  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:29.872725  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:31.154448  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:29:33.716257  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qcmlp" [11cb9c5b-1ee5-4b3e-85cc-a51d1a76c811] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004215745s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (94.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m34.666702227s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.67s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1129 10:30:09.562234  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:30:11.475325  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/functional-014829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:30:43.682277  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/no-preload-949993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 10:30:50.524238  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.196577363s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-kdmbv" [e8918249-ea1f-4367-b5f6-b778aac666d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003165622s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-151203 "pgrep -a kubelet"
I1129 10:31:16.628147  302182 config.go:182] Loaded profile config "flannel-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2ssj" [79fc3191-8483-4513-89a3-20cb7f948944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2ssj" [79fc3191-8483-4513-89a3-20cb7f948944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0034171s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-151203 "pgrep -a kubelet"
I1129 10:31:37.181538  302182 config.go:182] Loaded profile config "enable-default-cni-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mn5qr" [4e83a54a-2897-4910-8f70-4c0be5df7a13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mn5qr" [4e83a54a-2897-4910-8f70-4c0be5df7a13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003784836s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1129 10:32:12.446318  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/default-k8s-diff-port-194354/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-151203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.119381482s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-151203 "pgrep -a kubelet"
I1129 10:33:09.309189  302182 config.go:182] Loaded profile config "bridge-151203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-151203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sd8tc" [58ec931b-676a-446a-b692-4b4a8c64c308] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sd8tc" [58ec931b-676a-446a-b692-4b4a8c64c308] Running
E1129 10:33:14.897248  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/old-k8s-version-685516/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003316602s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-151203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1129 10:33:18.810267  302182 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/kindnet-151203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-151203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.48s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-753424 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-753424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-753424
--- SKIP: TestDownloadOnlyKic (0.48s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
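
Editor's note: the platform-gated skips above (the darwin-only Hyperkit tests, the windows-only scheduled-stop test, and the arm64 MySQL skip) reduce to checks against runtime.GOOS and runtime.GOARCH. A hedged sketch follows; the test names are invented for illustration and are not part of the suite.

    package platform_test

    import (
        "runtime"
        "testing"
    )

    func TestWindowsOnlyExample(t *testing.T) {
        // Mirrors the scheduled-stop gate: only run on Windows hosts.
        if runtime.GOOS != "windows" {
            t.Skip("test only runs on windows")
        }
    }

    func TestSkipMySQLOnArm64Example(t *testing.T) {
        // Mirrors the MySQL gate: the image is not published for arm64.
        if runtime.GOARCH == "arm64" {
            t.Skip("arm64 is not supported by mysql")
        }
    }

On this arm64/Linux worker the first example skips because GOOS is linux, and the second skips because GOARCH is arm64, matching the SKIP lines recorded above.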

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-259491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-259491
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-151203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 10:12:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-493711
contexts:
- context:
    cluster: running-upgrade-493711
    user: running-upgrade-493711
  name: running-upgrade-493711
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-493711
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.crt
    client-key: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-151203

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-151203"

                                                
                                                
----------------------- debugLogs end: kubenet-151203 [took: 3.610614317s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-151203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-151203
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)
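
Editor's note: every debugLogs command in the block above failed with "context was not found" or "Profile ... not found" because the kubeconfig captured above contains no kubenet-151203 context; its only context is running-upgrade-493711 and current-context is empty. Below is a minimal sketch of how such a precondition could be verified with client-go before collecting diagnostics; the program and its variable names are assumptions for illustration, not part of the test suite.

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := os.Getenv("KUBECONFIG") // path to the kubeconfig shown above
        name := "kubenet-151203"              // the context the debug commands target

        // Load the kubeconfig and check whether the named context exists.
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read kubeconfig: %v\n", err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("context %q not found; debug commands against it will fail\n", name)
            return
        }
        fmt.Printf("context %q exists; kubectl --context=%s should work\n", name, name)
    }

The same explanation applies to the cilium-151203 debugLogs block that follows.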

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-151203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-151203" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-300311/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 10:12:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-493711
contexts:
- context:
    cluster: running-upgrade-493711
    user: running-upgrade-493711
  name: running-upgrade-493711
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-493711
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.crt
    client-key: /home/jenkins/minikube-integration/22000-300311/.minikube/profiles/running-upgrade-493711/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-151203

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-151203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151203"

                                                
                                                
----------------------- debugLogs end: cilium-151203 [took: 3.973597615s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-151203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-151203
--- SKIP: TestNetworkPlugins/group/cilium (4.13s)

                                                
                                    